Test Report: Docker_Linux_crio_arm64 18771

d8f44c85dc50f37f8a74f4a275902bf69829aaa8:2024-04-29:34254

Test fail (4/321)

| Order | Failed test                                    | Duration (s) |
|-------|------------------------------------------------|--------------|
|    30 | TestAddons/parallel/Ingress                    |       167.42 |
|    32 | TestAddons/parallel/MetricsServer              |       347.86 |
|   173 | TestMultiControlPlane/serial/RestartCluster    |       127.45 |
|   275 | TestPause/serial/SecondStartNoReconfiguration  |        35.1  |
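
The TestAddons/parallel/Ingress failure below comes down to two checks: an in-cluster curl through minikube ssh that times out (ssh reports exit status 28, curl's operation-timeout code) and an ingress-dns nslookup that cannot reach the node. A minimal sketch for re-running both checks by hand, assuming the addons-457090 profile from this run is still available:

	# Ingress reachability check that timed out (curl exit 28 via ssh):
	out/minikube-linux-arm64 -p addons-457090 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# ingress-dns lookup that also failed (connection timed out):
	nslookup hello-john.test 192.168.49.2
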
TestAddons/parallel/Ingress (167.42s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-457090 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-457090 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-457090 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2baf6444-6386-438e-97a8-1d833bfb662c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2baf6444-6386-438e-97a8-1d833bfb662c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004159035s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-457090 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-457090 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.143488812s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-457090 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-457090 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.067174384s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-457090 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-457090 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-457090 addons disable ingress --alsologtostderr -v=1: (7.715684425s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-457090
helpers_test.go:235: (dbg) docker inspect addons-457090:
-- stdout --
	[
	    {
	        "Id": "bd12d3ace1bb99c97c85534c8adee0c896b84d4633e8fc4f8238ef3baef89283",
	        "Created": "2024-04-29T14:07:15.493652234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1903788,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-29T14:07:15.817386752Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c9315e0f61546d7b9630cf89252fa7f614fc966830e816cca5333df5c944376f",
	        "ResolvConfPath": "/var/lib/docker/containers/bd12d3ace1bb99c97c85534c8adee0c896b84d4633e8fc4f8238ef3baef89283/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bd12d3ace1bb99c97c85534c8adee0c896b84d4633e8fc4f8238ef3baef89283/hostname",
	        "HostsPath": "/var/lib/docker/containers/bd12d3ace1bb99c97c85534c8adee0c896b84d4633e8fc4f8238ef3baef89283/hosts",
	        "LogPath": "/var/lib/docker/containers/bd12d3ace1bb99c97c85534c8adee0c896b84d4633e8fc4f8238ef3baef89283/bd12d3ace1bb99c97c85534c8adee0c896b84d4633e8fc4f8238ef3baef89283-json.log",
	        "Name": "/addons-457090",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-457090:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-457090",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/811dc4453d69936d5896874dd5f4e4478c0e9e73b97f44bd0e82eb46ac761c9c-init/diff:/var/lib/docker/overlay2/f080d6ed1efba2dbfce916f4260b407bf4d9204079d2708eb1c14f6847e489ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/811dc4453d69936d5896874dd5f4e4478c0e9e73b97f44bd0e82eb46ac761c9c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/811dc4453d69936d5896874dd5f4e4478c0e9e73b97f44bd0e82eb46ac761c9c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/811dc4453d69936d5896874dd5f4e4478c0e9e73b97f44bd0e82eb46ac761c9c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-457090",
	                "Source": "/var/lib/docker/volumes/addons-457090/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-457090",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-457090",
	                "name.minikube.sigs.k8s.io": "addons-457090",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "079ddbe097cb5488e31811a4f7eaae32442e92a52f31f1ade40b3f25af515dcd",
	            "SandboxKey": "/var/run/docker/netns/079ddbe097cb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35042"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35041"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35038"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35040"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35039"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-457090": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "51179890997c9a35c5370f94d300b54c7cfc97355ada9f1fe12d84336c5bf2eb",
	                    "EndpointID": "24ec836fd989eee17d7df21cca7817d54bc7ed86503c52745596c1a4a655b584",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-457090",
	                        "bd12d3ace1bb"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-457090 -n addons-457090
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-457090 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-457090 logs -n 25: (1.434478421s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC | 29 Apr 24 14:06 UTC |
	| delete  | -p download-only-605899                                                                     | download-only-605899   | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC | 29 Apr 24 14:06 UTC |
	| delete  | -p download-only-668091                                                                     | download-only-668091   | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC | 29 Apr 24 14:06 UTC |
	| delete  | -p download-only-605899                                                                     | download-only-605899   | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC | 29 Apr 24 14:06 UTC |
	| start   | --download-only -p                                                                          | download-docker-259064 | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC |                     |
	|         | download-docker-259064                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-259064                                                                   | download-docker-259064 | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC | 29 Apr 24 14:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-349287   | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC |                     |
	|         | binary-mirror-349287                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36983                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-349287                                                                     | binary-mirror-349287   | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC | 29 Apr 24 14:06 UTC |
	| addons  | enable dashboard -p                                                                         | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC |                     |
	|         | addons-457090                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC |                     |
	|         | addons-457090                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-457090 --wait=true                                                                | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC | 29 Apr 24 14:10 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-457090 ip                                                                            | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:10 UTC | 29 Apr 24 14:10 UTC |
	| addons  | addons-457090 addons disable                                                                | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:10 UTC | 29 Apr 24 14:10 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:10 UTC | 29 Apr 24 14:10 UTC |
	|         | -p addons-457090                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-457090 ssh cat                                                                       | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:11 UTC | 29 Apr 24 14:11 UTC |
	|         | /opt/local-path-provisioner/pvc-d73e47b3-72c4-4752-8811-fa0e3b0dd658_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-457090 addons disable                                                                | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:11 UTC | 29 Apr 24 14:11 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-457090 addons                                                                        | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:11 UTC | 29 Apr 24 14:11 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-457090 addons                                                                        | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:11 UTC | 29 Apr 24 14:11 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:11 UTC | 29 Apr 24 14:11 UTC |
	|         | addons-457090                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:11 UTC | 29 Apr 24 14:11 UTC |
	|         | -p addons-457090                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:11 UTC | 29 Apr 24 14:12 UTC |
	|         | addons-457090                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-457090 ssh curl -s                                                                   | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:12 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-457090 ip                                                                            | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:14 UTC | 29 Apr 24 14:14 UTC |
	| addons  | addons-457090 addons disable                                                                | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:14 UTC | 29 Apr 24 14:14 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-457090 addons disable                                                                | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:14 UTC | 29 Apr 24 14:14 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 14:06:51
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 14:06:51.726047 1903322 out.go:291] Setting OutFile to fd 1 ...
	I0429 14:06:51.726231 1903322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:06:51.726266 1903322 out.go:304] Setting ErrFile to fd 2...
	I0429 14:06:51.726284 1903322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:06:51.726656 1903322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
	I0429 14:06:51.727303 1903322 out.go:298] Setting JSON to false
	I0429 14:06:51.728883 1903322 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":35356,"bootTime":1714364256,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 14:06:51.729055 1903322 start.go:139] virtualization:  
	I0429 14:06:51.732025 1903322 out.go:177] * [addons-457090] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 14:06:51.734936 1903322 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 14:06:51.736883 1903322 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 14:06:51.735005 1903322 notify.go:220] Checking for updates...
	I0429 14:06:51.740423 1903322 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:06:51.742403 1903322 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	I0429 14:06:51.744292 1903322 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0429 14:06:51.746034 1903322 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 14:06:51.748274 1903322 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 14:06:51.768336 1903322 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 14:06:51.768453 1903322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:06:51.832407 1903322 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-29 14:06:51.82274862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:06:51.832518 1903322 docker.go:295] overlay module found
	I0429 14:06:51.834657 1903322 out.go:177] * Using the docker driver based on user configuration
	I0429 14:06:51.836572 1903322 start.go:297] selected driver: docker
	I0429 14:06:51.836590 1903322 start.go:901] validating driver "docker" against <nil>
	I0429 14:06:51.836602 1903322 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 14:06:51.837285 1903322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:06:51.890240 1903322 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-29 14:06:51.88171619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:06:51.890427 1903322 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 14:06:51.890648 1903322 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 14:06:51.893056 1903322 out.go:177] * Using Docker driver with root privileges
	I0429 14:06:51.894893 1903322 cni.go:84] Creating CNI manager for ""
	I0429 14:06:51.894913 1903322 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 14:06:51.894922 1903322 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 14:06:51.895005 1903322 start.go:340] cluster config:
	{Name:addons-457090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-457090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:06:51.897389 1903322 out.go:177] * Starting "addons-457090" primary control-plane node in "addons-457090" cluster
	I0429 14:06:51.899197 1903322 cache.go:121] Beginning downloading kic base image for docker with crio
	I0429 14:06:51.901180 1903322 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 14:06:51.903373 1903322 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 14:06:51.903504 1903322 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:06:51.903535 1903322 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	I0429 14:06:51.903545 1903322 cache.go:56] Caching tarball of preloaded images
	I0429 14:06:51.903612 1903322 preload.go:173] Found /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0429 14:06:51.903628 1903322 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 14:06:51.903968 1903322 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/config.json ...
	I0429 14:06:51.903995 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/config.json: {Name:mkedaaf14e5e59422442c581aac85e090158d002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:06:51.917232 1903322 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0429 14:06:51.917356 1903322 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory
	I0429 14:06:51.917382 1903322 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory, skipping pull
	I0429 14:06:51.917391 1903322 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in cache, skipping pull
	I0429 14:06:51.917404 1903322 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e as a tarball
	I0429 14:06:51.917414 1903322 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e from local cache
	I0429 14:07:08.704420 1903322 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e from cached tarball
	I0429 14:07:08.704458 1903322 cache.go:194] Successfully downloaded all kic artifacts
	I0429 14:07:08.704495 1903322 start.go:360] acquireMachinesLock for addons-457090: {Name:mk348a5f4a64954a7fbc72594b4980ed5c9598c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 14:07:08.704614 1903322 start.go:364] duration metric: took 95.187µs to acquireMachinesLock for "addons-457090"
	I0429 14:07:08.704655 1903322 start.go:93] Provisioning new machine with config: &{Name:addons-457090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-457090 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 14:07:08.704747 1903322 start.go:125] createHost starting for "" (driver="docker")
	I0429 14:07:08.707668 1903322 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0429 14:07:08.707908 1903322 start.go:159] libmachine.API.Create for "addons-457090" (driver="docker")
	I0429 14:07:08.707953 1903322 client.go:168] LocalClient.Create starting
	I0429 14:07:08.708066 1903322 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem
	I0429 14:07:09.072840 1903322 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem
	I0429 14:07:09.873399 1903322 cli_runner.go:164] Run: docker network inspect addons-457090 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 14:07:09.888636 1903322 cli_runner.go:211] docker network inspect addons-457090 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 14:07:09.888730 1903322 network_create.go:281] running [docker network inspect addons-457090] to gather additional debugging logs...
	I0429 14:07:09.888752 1903322 cli_runner.go:164] Run: docker network inspect addons-457090
	W0429 14:07:09.902648 1903322 cli_runner.go:211] docker network inspect addons-457090 returned with exit code 1
	I0429 14:07:09.902676 1903322 network_create.go:284] error running [docker network inspect addons-457090]: docker network inspect addons-457090: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-457090 not found
	I0429 14:07:09.902689 1903322 network_create.go:286] output of [docker network inspect addons-457090]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-457090 not found
	
	** /stderr **
	I0429 14:07:09.902807 1903322 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 14:07:09.918453 1903322 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002545150}
	I0429 14:07:09.918494 1903322 network_create.go:124] attempt to create docker network addons-457090 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0429 14:07:09.918549 1903322 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-457090 addons-457090
	I0429 14:07:09.974964 1903322 network_create.go:108] docker network addons-457090 192.168.49.0/24 created
	I0429 14:07:09.974995 1903322 kic.go:121] calculated static IP "192.168.49.2" for the "addons-457090" container
	I0429 14:07:09.975079 1903322 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 14:07:09.988436 1903322 cli_runner.go:164] Run: docker volume create addons-457090 --label name.minikube.sigs.k8s.io=addons-457090 --label created_by.minikube.sigs.k8s.io=true
	I0429 14:07:10.016750 1903322 oci.go:103] Successfully created a docker volume addons-457090
	I0429 14:07:10.016862 1903322 cli_runner.go:164] Run: docker run --rm --name addons-457090-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-457090 --entrypoint /usr/bin/test -v addons-457090:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 14:07:11.324368 1903322 cli_runner.go:217] Completed: docker run --rm --name addons-457090-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-457090 --entrypoint /usr/bin/test -v addons-457090:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib: (1.307466693s)
	I0429 14:07:11.324402 1903322 oci.go:107] Successfully prepared a docker volume addons-457090
	I0429 14:07:11.324446 1903322 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:07:11.324468 1903322 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 14:07:11.324544 1903322 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-457090:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 14:07:15.432945 1903322 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-457090:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.108361242s)
	I0429 14:07:15.432977 1903322 kic.go:203] duration metric: took 4.108504822s to extract preloaded images to volume ...
	W0429 14:07:15.433112 1903322 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0429 14:07:15.433234 1903322 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0429 14:07:15.480396 1903322 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-457090 --name addons-457090 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-457090 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-457090 --network addons-457090 --ip 192.168.49.2 --volume addons-457090:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e
	I0429 14:07:15.826833 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Running}}
	I0429 14:07:15.847470 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:15.869515 1903322 cli_runner.go:164] Run: docker exec addons-457090 stat /var/lib/dpkg/alternatives/iptables
	I0429 14:07:15.935703 1903322 oci.go:144] the created container "addons-457090" has a running status.
	I0429 14:07:15.935738 1903322 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa...
	I0429 14:07:16.511966 1903322 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0429 14:07:16.544784 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:16.562221 1903322 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0429 14:07:16.562245 1903322 kic_runner.go:114] Args: [docker exec --privileged addons-457090 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0429 14:07:16.624195 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:16.643295 1903322 machine.go:94] provisionDockerMachine start ...
	I0429 14:07:16.643401 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:16.663210 1903322 main.go:141] libmachine: Using SSH client type: native
	I0429 14:07:16.663482 1903322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35042 <nil> <nil>}
	I0429 14:07:16.663490 1903322 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 14:07:16.791991 1903322 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-457090
	
	I0429 14:07:16.792059 1903322 ubuntu.go:169] provisioning hostname "addons-457090"
	I0429 14:07:16.792152 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:16.814630 1903322 main.go:141] libmachine: Using SSH client type: native
	I0429 14:07:16.814958 1903322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35042 <nil> <nil>}
	I0429 14:07:16.814975 1903322 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-457090 && echo "addons-457090" | sudo tee /etc/hostname
	I0429 14:07:16.961204 1903322 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-457090
	
	I0429 14:07:16.961333 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:16.977684 1903322 main.go:141] libmachine: Using SSH client type: native
	I0429 14:07:16.977930 1903322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35042 <nil> <nil>}
	I0429 14:07:16.977952 1903322 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-457090' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-457090/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-457090' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 14:07:17.104776 1903322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 14:07:17.104816 1903322 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18771-1897267/.minikube CaCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18771-1897267/.minikube}
	I0429 14:07:17.104843 1903322 ubuntu.go:177] setting up certificates
	I0429 14:07:17.104852 1903322 provision.go:84] configureAuth start
	I0429 14:07:17.104922 1903322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-457090
	I0429 14:07:17.124766 1903322 provision.go:143] copyHostCerts
	I0429 14:07:17.124852 1903322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem (1078 bytes)
	I0429 14:07:17.124980 1903322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem (1123 bytes)
	I0429 14:07:17.125049 1903322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem (1679 bytes)
	I0429 14:07:17.125105 1903322 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem org=jenkins.addons-457090 san=[127.0.0.1 192.168.49.2 addons-457090 localhost minikube]
	I0429 14:07:17.501573 1903322 provision.go:177] copyRemoteCerts
	I0429 14:07:17.501655 1903322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 14:07:17.501707 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:17.519353 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:17.609688 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 14:07:17.634692 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 14:07:17.658205 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 14:07:17.681793 1903322 provision.go:87] duration metric: took 576.926595ms to configureAuth
	I0429 14:07:17.681824 1903322 ubuntu.go:193] setting minikube options for container-runtime
	I0429 14:07:17.682008 1903322 config.go:182] Loaded profile config "addons-457090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:07:17.682114 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:17.697657 1903322 main.go:141] libmachine: Using SSH client type: native
	I0429 14:07:17.697906 1903322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35042 <nil> <nil>}
	I0429 14:07:17.697926 1903322 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 14:07:17.931541 1903322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 14:07:17.931568 1903322 machine.go:97] duration metric: took 1.288248315s to provisionDockerMachine
	I0429 14:07:17.931578 1903322 client.go:171] duration metric: took 9.223615798s to LocalClient.Create
	I0429 14:07:17.931592 1903322 start.go:167] duration metric: took 9.223684951s to libmachine.API.Create "addons-457090"
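The provisioning above runs entirely over SSH against the forwarded port 35042, using the key generated at /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa. As a rough sketch only (not minikube's actual implementation), the same pattern in Go with golang.org/x/crypto/ssh might look like the following; the hostname command, key path, and port are taken from the log above, everything else is assumed:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key generated by kic.go above; 127.0.0.1:35042 is the forwarded 22/tcp port.
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container, demo only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:35042", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		// Same command the provisioner issues above to set the hostname.
		out, err := sess.CombinedOutput(`sudo hostname addons-457090 && echo "addons-457090" | sudo tee /etc/hostname`)
		fmt.Println(string(out), err)
	}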
	I0429 14:07:17.931599 1903322 start.go:293] postStartSetup for "addons-457090" (driver="docker")
	I0429 14:07:17.931610 1903322 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 14:07:17.931671 1903322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 14:07:17.931718 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:17.948582 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:18.039110 1903322 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 14:07:18.042932 1903322 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0429 14:07:18.042990 1903322 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0429 14:07:18.043003 1903322 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0429 14:07:18.043018 1903322 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0429 14:07:18.043033 1903322 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-1897267/.minikube/addons for local assets ...
	I0429 14:07:18.043116 1903322 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-1897267/.minikube/files for local assets ...
	I0429 14:07:18.043155 1903322 start.go:296] duration metric: took 111.549816ms for postStartSetup
	I0429 14:07:18.043524 1903322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-457090
	I0429 14:07:18.059907 1903322 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/config.json ...
	I0429 14:07:18.060217 1903322 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 14:07:18.060293 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:18.078692 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:18.165599 1903322 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 14:07:18.169873 1903322 start.go:128] duration metric: took 9.465111051s to createHost
	I0429 14:07:18.169895 1903322 start.go:83] releasing machines lock for "addons-457090", held for 9.465267489s
	I0429 14:07:18.169962 1903322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-457090
	I0429 14:07:18.185397 1903322 ssh_runner.go:195] Run: cat /version.json
	I0429 14:07:18.185448 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:18.185478 1903322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 14:07:18.185530 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:18.206866 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:18.208166 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:18.292061 1903322 ssh_runner.go:195] Run: systemctl --version
	I0429 14:07:18.296927 1903322 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 14:07:18.460392 1903322 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 14:07:18.464530 1903322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 14:07:18.486628 1903322 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0429 14:07:18.486704 1903322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 14:07:18.515812 1903322 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0429 14:07:18.515834 1903322 start.go:494] detecting cgroup driver to use...
	I0429 14:07:18.515864 1903322 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0429 14:07:18.515933 1903322 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 14:07:18.532069 1903322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 14:07:18.545326 1903322 docker.go:217] disabling cri-docker service (if available) ...
	I0429 14:07:18.545391 1903322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 14:07:18.560555 1903322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 14:07:18.576103 1903322 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 14:07:18.677657 1903322 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 14:07:18.782118 1903322 docker.go:233] disabling docker service ...
	I0429 14:07:18.782185 1903322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 14:07:18.802358 1903322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 14:07:18.814464 1903322 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 14:07:18.899110 1903322 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 14:07:18.996822 1903322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 14:07:19.010748 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 14:07:19.027568 1903322 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 14:07:19.027637 1903322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:07:19.038137 1903322 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 14:07:19.038245 1903322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:07:19.047772 1903322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:07:19.057458 1903322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:07:19.067392 1903322 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 14:07:19.076463 1903322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:07:19.085874 1903322 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:07:19.101173 1903322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:07:19.110962 1903322 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 14:07:19.119965 1903322 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
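The cri-o adjustments above are plain stream edits of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl). Purely as an illustrative sketch, the first of those sed rewrites could be expressed in Go with a regexp; the file path and pause image come from the log, the rest is an assumption and not how minikube itself does it:

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		if err := os.WriteFile(conf, out, 0o644); err != nil {
			panic(err)
		}
	}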
	I0429 14:07:19.128360 1903322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:07:19.216325 1903322 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 14:07:19.332002 1903322 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 14:07:19.332145 1903322 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 14:07:19.335814 1903322 start.go:562] Will wait 60s for crictl version
	I0429 14:07:19.335880 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:07:19.339443 1903322 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 14:07:19.381985 1903322 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0429 14:07:19.382095 1903322 ssh_runner.go:195] Run: crio --version
	I0429 14:07:19.421468 1903322 ssh_runner.go:195] Run: crio --version
	I0429 14:07:19.471680 1903322 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.24.6 ...
	I0429 14:07:19.473838 1903322 cli_runner.go:164] Run: docker network inspect addons-457090 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 14:07:19.489193 1903322 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0429 14:07:19.492958 1903322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 14:07:19.504081 1903322 kubeadm.go:877] updating cluster {Name:addons-457090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-457090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 14:07:19.504207 1903322 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:07:19.504268 1903322 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 14:07:19.580861 1903322 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 14:07:19.580883 1903322 crio.go:433] Images already preloaded, skipping extraction
	I0429 14:07:19.580937 1903322 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 14:07:19.620228 1903322 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 14:07:19.620252 1903322 cache_images.go:84] Images are preloaded, skipping loading
	I0429 14:07:19.620261 1903322 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.0 crio true true} ...
	I0429 14:07:19.620354 1903322 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-457090 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-457090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 14:07:19.620443 1903322 ssh_runner.go:195] Run: crio config
	I0429 14:07:19.667700 1903322 cni.go:84] Creating CNI manager for ""
	I0429 14:07:19.667724 1903322 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 14:07:19.667741 1903322 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 14:07:19.667763 1903322 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-457090 NodeName:addons-457090 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 14:07:19.667916 1903322 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-457090"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 14:07:19.667985 1903322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 14:07:19.676744 1903322 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 14:07:19.676811 1903322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 14:07:19.685267 1903322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0429 14:07:19.702664 1903322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 14:07:19.720185 1903322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
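The 2151-byte file copied here is the four-document kubeadm config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One way to sanity-check such a stream, for example that ClusterConfiguration's podSubnet matches kube-proxy's clusterCIDR, is to decode it document by document. The sketch below uses gopkg.in/yaml.v3 against an assumed local copy of the file and is not part of the test:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Assumed local copy of the generated multi-document config.
		f, err := os.Open("kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				panic(err)
			}
			switch doc["kind"] {
			case "ClusterConfiguration":
				net := doc["networking"].(map[string]interface{})
				fmt.Println("podSubnet:", net["podSubnet"]) // expect 10.244.0.0/16
			case "KubeProxyConfiguration":
				fmt.Println("clusterCIDR:", doc["clusterCIDR"]) // must match podSubnet
			}
		}
	}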
	I0429 14:07:19.737727 1903322 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0429 14:07:19.740977 1903322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 14:07:19.751643 1903322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:07:19.840633 1903322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 14:07:19.854203 1903322 certs.go:68] Setting up /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090 for IP: 192.168.49.2
	I0429 14:07:19.854276 1903322 certs.go:194] generating shared ca certs ...
	I0429 14:07:19.854305 1903322 certs.go:226] acquiring lock for ca certs: {Name:mk012c6865f9f1625b7bfd5d0280b6707793520e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:19.854462 1903322 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key
	I0429 14:07:20.329665 1903322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt ...
	I0429 14:07:20.329703 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt: {Name:mka2019fbfe59146662f34b9c21b1924ee4d4781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:20.329951 1903322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key ...
	I0429 14:07:20.329968 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key: {Name:mk0778f4bf44036cace3ccb43916ea03bd13d929 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:20.330063 1903322 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key
	I0429 14:07:20.761699 1903322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.crt ...
	I0429 14:07:20.761736 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.crt: {Name:mk636a2913a13527b6a821d0a19482cdb8456da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:20.761932 1903322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key ...
	I0429 14:07:20.761946 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key: {Name:mk8a61b3fea9bc6c8b23591c0561875cabea7997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:20.762032 1903322 certs.go:256] generating profile certs ...
	I0429 14:07:20.762098 1903322 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.key
	I0429 14:07:20.762119 1903322 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt with IP's: []
	I0429 14:07:21.023439 1903322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt ...
	I0429 14:07:21.023471 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: {Name:mk52bba284d9b76dabfc3f7a15a199308f6ebebe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:21.023663 1903322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.key ...
	I0429 14:07:21.023676 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.key: {Name:mkb948f73d775d0f71a7f77dd796aca72d0a0e47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:21.023763 1903322 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.key.0cec27d5
	I0429 14:07:21.023785 1903322 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.crt.0cec27d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0429 14:07:21.495009 1903322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.crt.0cec27d5 ...
	I0429 14:07:21.495043 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.crt.0cec27d5: {Name:mkbeccdc1710362174910617f4bca97d1c55e709 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:21.495251 1903322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.key.0cec27d5 ...
	I0429 14:07:21.495270 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.key.0cec27d5: {Name:mkbd1fb5674880c1d08aa291a724907ba2c49844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:21.495362 1903322 certs.go:381] copying /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.crt.0cec27d5 -> /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.crt
	I0429 14:07:21.495449 1903322 certs.go:385] copying /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.key.0cec27d5 -> /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.key
	I0429 14:07:21.495504 1903322 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/proxy-client.key
	I0429 14:07:21.495526 1903322 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/proxy-client.crt with IP's: []
	I0429 14:07:21.940445 1903322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/proxy-client.crt ...
	I0429 14:07:21.940479 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/proxy-client.crt: {Name:mk7cfbd8e5ee4155ae9c21eb6f1f17142ba58dac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:21.940692 1903322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/proxy-client.key ...
	I0429 14:07:21.940708 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/proxy-client.key: {Name:mk1e89081eb29f83f8c3a45d37d3ea69612ced43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:21.940934 1903322 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 14:07:21.940980 1903322 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem (1078 bytes)
	I0429 14:07:21.941005 1903322 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem (1123 bytes)
	I0429 14:07:21.941032 1903322 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem (1679 bytes)
	I0429 14:07:21.941687 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 14:07:21.967017 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 14:07:21.991865 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 14:07:22.020990 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 14:07:22.046811 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0429 14:07:22.071941 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 14:07:22.096452 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 14:07:22.120823 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 14:07:22.145451 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 14:07:22.172925 1903322 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
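certs.go generates the minikubeCA and the profile certificates on the Jenkins host and only copies them into the node in the steps above. Purely for orientation, a self-signed CA of the same general shape can be produced with Go's standard crypto/x509; the common name, key size, and lifetime below are assumptions rather than minikube's exact parameters:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		// Hypothetical stand-in for the "minikubeCA" generation logged above.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
			IsCA:                  true,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		// Write the PEM-encoded certificate and key, mirroring ca.crt / ca.key above.
		_ = os.WriteFile("ca.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
		_ = os.WriteFile("ca.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
	}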
	I0429 14:07:22.193804 1903322 ssh_runner.go:195] Run: openssl version
	I0429 14:07:22.202234 1903322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 14:07:22.212459 1903322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:07:22.216222 1903322 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 14:07 /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:07:22.216395 1903322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:07:22.223668 1903322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 14:07:22.238455 1903322 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 14:07:22.242063 1903322 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 14:07:22.242136 1903322 kubeadm.go:391] StartCluster: {Name:addons-457090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-457090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:07:22.242226 1903322 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 14:07:22.242297 1903322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 14:07:22.285318 1903322 cri.go:89] found id: ""
	I0429 14:07:22.285386 1903322 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 14:07:22.295888 1903322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 14:07:22.304970 1903322 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0429 14:07:22.305055 1903322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 14:07:22.314032 1903322 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 14:07:22.314052 1903322 kubeadm.go:156] found existing configuration files:
	
	I0429 14:07:22.314122 1903322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 14:07:22.323116 1903322 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 14:07:22.323202 1903322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 14:07:22.331826 1903322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 14:07:22.340979 1903322 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 14:07:22.341072 1903322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 14:07:22.349538 1903322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 14:07:22.358866 1903322 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 14:07:22.358945 1903322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 14:07:22.367425 1903322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 14:07:22.376471 1903322 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 14:07:22.376533 1903322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 14:07:22.384850 1903322 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0429 14:07:22.433339 1903322 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 14:07:22.433632 1903322 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 14:07:22.471872 1903322 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0429 14:07:22.472005 1903322 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1058-aws
	I0429 14:07:22.472072 1903322 kubeadm.go:309] OS: Linux
	I0429 14:07:22.472137 1903322 kubeadm.go:309] CGROUPS_CPU: enabled
	I0429 14:07:22.472211 1903322 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0429 14:07:22.472285 1903322 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0429 14:07:22.472359 1903322 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0429 14:07:22.472431 1903322 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0429 14:07:22.472496 1903322 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0429 14:07:22.472571 1903322 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0429 14:07:22.472639 1903322 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0429 14:07:22.472730 1903322 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0429 14:07:22.538884 1903322 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 14:07:22.539047 1903322 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 14:07:22.539168 1903322 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 14:07:22.785018 1903322 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 14:07:22.788710 1903322 out.go:204]   - Generating certificates and keys ...
	I0429 14:07:22.788826 1903322 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 14:07:22.788909 1903322 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 14:07:24.117498 1903322 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 14:07:24.597909 1903322 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 14:07:24.824332 1903322 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 14:07:25.177026 1903322 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 14:07:25.420914 1903322 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 14:07:25.421062 1903322 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-457090 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0429 14:07:25.608470 1903322 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 14:07:25.608798 1903322 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-457090 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0429 14:07:25.870697 1903322 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 14:07:26.389423 1903322 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 14:07:26.565053 1903322 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 14:07:26.565333 1903322 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 14:07:26.742781 1903322 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 14:07:27.135068 1903322 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 14:07:27.754457 1903322 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 14:07:28.306023 1903322 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 14:07:28.837456 1903322 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 14:07:28.838219 1903322 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 14:07:28.843076 1903322 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 14:07:28.845114 1903322 out.go:204]   - Booting up control plane ...
	I0429 14:07:28.845213 1903322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 14:07:28.845289 1903322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 14:07:28.846030 1903322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 14:07:28.867187 1903322 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 14:07:28.868119 1903322 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 14:07:28.868328 1903322 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 14:07:28.966286 1903322 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 14:07:28.966373 1903322 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 14:07:29.967119 1903322 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.000919123s
	I0429 14:07:29.967239 1903322 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 14:07:36.469977 1903322 kubeadm.go:309] [api-check] The API server is healthy after 6.502849708s
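The api-check phase keeps probing the API server's health endpoint until it answers. A stand-alone approximation of that wait is sketched below; kubeadm itself goes through a client-go round tripper, so the plain HTTPS probe, the /healthz path, and the skipped certificate verification here are assumptions made only to illustrate the loop:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Probe the control-plane endpoint seen in this run; demo only, not kubeadm's code.
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
			},
		}
		deadline := time.Now().Add(4 * time.Minute) // matches the "up to 4m0s" message above
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.49.2:8443/healthz")
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println("API server is healthy")
				return
			}
			if resp != nil {
				resp.Body.Close()
			}
			time.Sleep(time.Second)
		}
		fmt.Println("timed out waiting for a healthy API server")
	}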
	I0429 14:07:36.490983 1903322 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 14:07:36.505592 1903322 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 14:07:36.530007 1903322 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 14:07:36.530204 1903322 kubeadm.go:309] [mark-control-plane] Marking the node addons-457090 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 14:07:36.540911 1903322 kubeadm.go:309] [bootstrap-token] Using token: 299kq3.syi4mwk6phg59drt
	I0429 14:07:36.542844 1903322 out.go:204]   - Configuring RBAC rules ...
	I0429 14:07:36.542971 1903322 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 14:07:36.547511 1903322 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 14:07:36.556370 1903322 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 14:07:36.560078 1903322 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 14:07:36.564151 1903322 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 14:07:36.568408 1903322 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 14:07:36.877214 1903322 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 14:07:37.320720 1903322 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 14:07:37.876249 1903322 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 14:07:37.877488 1903322 kubeadm.go:309] 
	I0429 14:07:37.877558 1903322 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 14:07:37.877569 1903322 kubeadm.go:309] 
	I0429 14:07:37.877651 1903322 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 14:07:37.877660 1903322 kubeadm.go:309] 
	I0429 14:07:37.877689 1903322 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 14:07:37.877758 1903322 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 14:07:37.877811 1903322 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 14:07:37.877821 1903322 kubeadm.go:309] 
	I0429 14:07:37.877874 1903322 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 14:07:37.877882 1903322 kubeadm.go:309] 
	I0429 14:07:37.877928 1903322 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 14:07:37.877936 1903322 kubeadm.go:309] 
	I0429 14:07:37.877986 1903322 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 14:07:37.878065 1903322 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 14:07:37.878136 1903322 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 14:07:37.878145 1903322 kubeadm.go:309] 
	I0429 14:07:37.878226 1903322 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 14:07:37.878303 1903322 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 14:07:37.878311 1903322 kubeadm.go:309] 
	I0429 14:07:37.878392 1903322 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 299kq3.syi4mwk6phg59drt \
	I0429 14:07:37.878495 1903322 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:21d9b8764194e6fe6c1583ba013e3f02163c5cceb0b910b9847eaf47c168f2e3 \
	I0429 14:07:37.878517 1903322 kubeadm.go:309] 	--control-plane 
	I0429 14:07:37.878526 1903322 kubeadm.go:309] 
	I0429 14:07:37.878608 1903322 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 14:07:37.878630 1903322 kubeadm.go:309] 
	I0429 14:07:37.878711 1903322 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 299kq3.syi4mwk6phg59drt \
	I0429 14:07:37.878813 1903322 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:21d9b8764194e6fe6c1583ba013e3f02163c5cceb0b910b9847eaf47c168f2e3 
	I0429 14:07:37.882282 1903322 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1058-aws\n", err: exit status 1
	I0429 14:07:37.882397 1903322 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 14:07:37.882417 1903322 cni.go:84] Creating CNI manager for ""
	I0429 14:07:37.882425 1903322 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 14:07:37.885486 1903322 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 14:07:37.887217 1903322 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 14:07:37.890984 1903322 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 14:07:37.891003 1903322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 14:07:37.909041 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 14:07:38.177444 1903322 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 14:07:38.177576 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:38.177669 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-457090 minikube.k8s.io/updated_at=2024_04_29T14_07_38_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=94a01df0d4b48636d4af3d06a53be687e06c0844 minikube.k8s.io/name=addons-457090 minikube.k8s.io/primary=true
	I0429 14:07:38.321955 1903322 ops.go:34] apiserver oom_adj: -16
	I0429 14:07:38.322056 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:38.822612 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:39.322187 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:39.822896 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:40.322867 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:40.822668 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:41.322497 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:41.822935 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:42.322739 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:42.822966 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:43.322402 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:43.823123 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:44.322873 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:44.823060 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:45.323239 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:45.822996 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:46.322647 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:46.823086 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:47.322514 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:47.822688 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:48.322988 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:48.822385 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:49.323051 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:49.823158 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:49.910592 1903322 kubeadm.go:1107] duration metric: took 11.733062181s to wait for elevateKubeSystemPrivileges
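The repeated "kubectl get sa default" runs above are minikube polling until kube-controller-manager has created the "default" ServiceAccount in the new cluster, which is what the 11.7s elevateKubeSystemPrivileges wait measures. A minimal standalone sketch of the same poll (illustrative only, not minikube's actual code path):

	# Retry until the "default" ServiceAccount exists, mirroring the loop logged above
	until sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done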
	W0429 14:07:49.910628 1903322 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 14:07:49.910638 1903322 kubeadm.go:393] duration metric: took 27.668520713s to StartCluster
	I0429 14:07:49.910653 1903322 settings.go:142] acquiring lock: {Name:mkd5b42c61905151cf6a97c69329c4a81e851953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:49.910769 1903322 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:07:49.911211 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/kubeconfig: {Name:mkd7a824e40528d6a3c0c02051ff0aa2d4aebaa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:49.911410 1903322 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 14:07:49.913684 1903322 out.go:177] * Verifying Kubernetes components...
	I0429 14:07:49.911533 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 14:07:49.911693 1903322 config.go:182] Loaded profile config "addons-457090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:07:49.911703 1903322 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
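The toEnable map above is the set of addons this profile turns on (ingress, ingress-dns, metrics-server, registry, csi-hostpath-driver, volumesnapshots, yakd, and others). Outside the test harness the same addons would normally be requested through the minikube CLI; a hedged example using a few of the names from the map:

	# Illustrative only; the test enables these programmatically via addons.go
	minikube -p addons-457090 addons enable ingress
	minikube -p addons-457090 addons enable metrics-server
	minikube -p addons-457090 addons enable csi-hostpath-driver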
	I0429 14:07:49.915370 1903322 addons.go:69] Setting yakd=true in profile "addons-457090"
	I0429 14:07:49.915397 1903322 addons.go:234] Setting addon yakd=true in "addons-457090"
	I0429 14:07:49.915397 1903322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:07:49.915427 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.915481 1903322 addons.go:69] Setting ingress-dns=true in profile "addons-457090"
	I0429 14:07:49.915501 1903322 addons.go:234] Setting addon ingress-dns=true in "addons-457090"
	I0429 14:07:49.915531 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.915883 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.915904 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.916247 1903322 addons.go:69] Setting inspektor-gadget=true in profile "addons-457090"
	I0429 14:07:49.916271 1903322 addons.go:234] Setting addon inspektor-gadget=true in "addons-457090"
	I0429 14:07:49.916294 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.916692 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.916953 1903322 addons.go:69] Setting cloud-spanner=true in profile "addons-457090"
	I0429 14:07:49.916975 1903322 addons.go:234] Setting addon cloud-spanner=true in "addons-457090"
	I0429 14:07:49.917000 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.917353 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.919671 1903322 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-457090"
	I0429 14:07:49.919739 1903322 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-457090"
	I0429 14:07:49.919771 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.920170 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.929934 1903322 addons.go:69] Setting metrics-server=true in profile "addons-457090"
	I0429 14:07:49.930028 1903322 addons.go:234] Setting addon metrics-server=true in "addons-457090"
	I0429 14:07:49.930095 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.930607 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.930792 1903322 addons.go:69] Setting default-storageclass=true in profile "addons-457090"
	I0429 14:07:49.930841 1903322 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-457090"
	I0429 14:07:49.933057 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.949132 1903322 addons.go:69] Setting gcp-auth=true in profile "addons-457090"
	I0429 14:07:49.949241 1903322 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-457090"
	I0429 14:07:49.949264 1903322 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-457090"
	I0429 14:07:49.949301 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.949751 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.949952 1903322 mustload.go:65] Loading cluster: addons-457090
	I0429 14:07:49.950107 1903322 config.go:182] Loaded profile config "addons-457090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:07:49.950311 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.952089 1903322 addons.go:69] Setting registry=true in profile "addons-457090"
	I0429 14:07:49.952122 1903322 addons.go:234] Setting addon registry=true in "addons-457090"
	I0429 14:07:49.952161 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.952552 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.957809 1903322 addons.go:69] Setting storage-provisioner=true in profile "addons-457090"
	I0429 14:07:49.957909 1903322 addons.go:234] Setting addon storage-provisioner=true in "addons-457090"
	I0429 14:07:49.957976 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.958492 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.960511 1903322 addons.go:69] Setting ingress=true in profile "addons-457090"
	I0429 14:07:49.971551 1903322 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-457090"
	I0429 14:07:49.971559 1903322 addons.go:69] Setting volumesnapshots=true in profile "addons-457090"
	I0429 14:07:50.015453 1903322 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0429 14:07:50.017156 1903322 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0429 14:07:50.017185 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0429 14:07:50.017267 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.028914 1903322 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0429 14:07:50.030550 1903322 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 14:07:50.030570 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0429 14:07:50.030637 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.029818 1903322 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-457090"
	I0429 14:07:50.029847 1903322 addons.go:234] Setting addon ingress=true in "addons-457090"
	I0429 14:07:50.029882 1903322 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0429 14:07:50.029898 1903322 addons.go:234] Setting addon volumesnapshots=true in "addons-457090"
	I0429 14:07:50.029906 1903322 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0429 14:07:50.049183 1903322 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0429 14:07:50.049206 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0429 14:07:50.049270 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.051869 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0429 14:07:50.053709 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0429 14:07:50.057339 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0429 14:07:50.064856 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0429 14:07:50.062898 1903322 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0429 14:07:50.062908 1903322 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0429 14:07:50.063236 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:50.063270 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:50.063306 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:50.072456 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:50.076536 1903322 addons.go:234] Setting addon default-storageclass=true in "addons-457090"
	I0429 14:07:50.077721 1903322 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0429 14:07:50.078221 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:50.092648 1903322 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0429 14:07:50.095524 1903322 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 14:07:50.095548 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 14:07:50.095612 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.092925 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0429 14:07:50.106440 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0429 14:07:50.121572 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0429 14:07:50.105379 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:50.093669 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0429 14:07:50.093707 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:50.093056 1903322 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 14:07:50.128458 1903322 out.go:177]   - Using image docker.io/registry:2.8.3
	I0429 14:07:50.129012 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:50.129041 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.129053 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0429 14:07:50.133540 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0429 14:07:50.144894 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.149014 1903322 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0429 14:07:50.149161 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0429 14:07:50.149224 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.149058 1903322 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0429 14:07:50.168943 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0429 14:07:50.169018 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.149066 1903322 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 14:07:50.190663 1903322 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 14:07:50.190684 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 14:07:50.190748 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.191017 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
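The docker container inspect calls with the HostPort template above resolve the host port that Docker published for the node container's SSH port (22/tcp); the sshutil lines then dial 127.0.0.1 on that port (35042 here) as user docker with the profile's id_rsa key. A standalone equivalent of the lookup, assuming the docker CLI and the same profile name:

	# Print the host port mapped to container port 22/tcp of the minikube node container
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-457090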
	I0429 14:07:50.228515 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.229165 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.274594 1903322 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-457090"
	I0429 14:07:50.274637 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:50.275185 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:50.297014 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0429 14:07:50.309517 1903322 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0429 14:07:50.309585 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0429 14:07:50.309691 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.319701 1903322 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 14:07:50.319725 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 14:07:50.319789 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.340723 1903322 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0429 14:07:50.342647 1903322 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0429 14:07:50.341253 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.339300 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.306863 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.355408 1903322 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0429 14:07:50.358305 1903322 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 14:07:50.358328 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0429 14:07:50.358393 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.396934 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.397286 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.400468 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.402924 1903322 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0429 14:07:50.404772 1903322 out.go:177]   - Using image docker.io/busybox:stable
	I0429 14:07:50.412461 1903322 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 14:07:50.412483 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0429 14:07:50.412544 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.445005 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.446025 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.452786 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.464977 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.540903 1903322 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0429 14:07:50.540930 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0429 14:07:50.573298 1903322 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0429 14:07:50.573324 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0429 14:07:50.624222 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0429 14:07:50.647268 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 14:07:50.691721 1903322 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0429 14:07:50.691794 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0429 14:07:50.704277 1903322 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 14:07:50.704348 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0429 14:07:50.736962 1903322 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0429 14:07:50.737031 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0429 14:07:50.758623 1903322 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0429 14:07:50.758694 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0429 14:07:50.766732 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 14:07:50.780016 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 14:07:50.784356 1903322 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0429 14:07:50.784427 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0429 14:07:50.803286 1903322 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 14:07:50.803355 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 14:07:50.853024 1903322 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0429 14:07:50.853095 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0429 14:07:50.855135 1903322 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0429 14:07:50.855198 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0429 14:07:50.877043 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 14:07:50.879541 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 14:07:50.890989 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 14:07:50.916296 1903322 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0429 14:07:50.916367 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0429 14:07:50.929134 1903322 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.013709315s)
	I0429 14:07:50.929283 1903322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 14:07:50.929163 1903322 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.012327317s)
	I0429 14:07:50.929522 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
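The bash pipeline above patches the coredns ConfigMap so that host.minikube.internal resolves to the gateway address 192.168.49.1 from inside pods: the first sed expression inserts a hosts block ahead of the forward plugin and the second adds the log plugin before errors (the "host record injected into CoreDNS's ConfigMap" line later confirms it took effect). Reconstructed from those sed expressions, the patched Corefile fragment looks roughly like this (a sketch, not a dump of the actual ConfigMap):

	        log
	        errors
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...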
	I0429 14:07:50.933388 1903322 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 14:07:50.933453 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 14:07:50.933661 1903322 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0429 14:07:50.933695 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0429 14:07:50.959215 1903322 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0429 14:07:50.959287 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0429 14:07:51.047320 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0429 14:07:51.066958 1903322 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0429 14:07:51.067031 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0429 14:07:51.125375 1903322 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0429 14:07:51.125450 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0429 14:07:51.138951 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 14:07:51.150957 1903322 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0429 14:07:51.151024 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0429 14:07:51.156115 1903322 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0429 14:07:51.156193 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0429 14:07:51.245351 1903322 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0429 14:07:51.245427 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0429 14:07:51.327769 1903322 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0429 14:07:51.327896 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0429 14:07:51.359887 1903322 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0429 14:07:51.359957 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0429 14:07:51.372063 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0429 14:07:51.445909 1903322 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0429 14:07:51.445936 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0429 14:07:51.507189 1903322 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0429 14:07:51.507216 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0429 14:07:51.511967 1903322 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0429 14:07:51.512002 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0429 14:07:51.561847 1903322 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 14:07:51.561878 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0429 14:07:51.601366 1903322 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0429 14:07:51.601400 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0429 14:07:51.610605 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 14:07:51.648038 1903322 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 14:07:51.648072 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0429 14:07:51.665161 1903322 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0429 14:07:51.665191 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0429 14:07:51.741997 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 14:07:51.744400 1903322 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0429 14:07:51.744422 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0429 14:07:51.806488 1903322 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0429 14:07:51.806510 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0429 14:07:51.899739 1903322 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 14:07:51.899766 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0429 14:07:51.972500 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 14:07:54.197102 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.5728438s)
	I0429 14:07:54.335169 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.687865488s)
	I0429 14:07:54.335255 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.568457053s)
	I0429 14:07:54.628080 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.847989845s)
	I0429 14:07:54.824025 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.946906153s)
	I0429 14:07:55.993028 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.113403778s)
	I0429 14:07:55.993130 1903322 addons.go:470] Verifying addon ingress=true in "addons-457090"
	I0429 14:07:55.995630 1903322 out.go:177] * Verifying ingress addon...
	I0429 14:07:55.993400 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.102332723s)
	I0429 14:07:55.993520 1903322 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.063959433s)
	I0429 14:07:55.993536 1903322 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.06421265s)
	I0429 14:07:55.993568 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.946177269s)
	I0429 14:07:55.993644 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.85462323s)
	I0429 14:07:55.993694 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.621596981s)
	I0429 14:07:55.993766 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.383134718s)
	I0429 14:07:55.994004 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.251824062s)
	I0429 14:07:55.996100 1903322 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0429 14:07:55.997092 1903322 node_ready.go:35] waiting up to 6m0s for node "addons-457090" to be "Ready" ...
	I0429 14:07:55.997444 1903322 addons.go:470] Verifying addon registry=true in "addons-457090"
	I0429 14:07:55.997454 1903322 addons.go:470] Verifying addon metrics-server=true in "addons-457090"
	W0429 14:07:55.997489 1903322 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0429 14:07:56.000393 1903322 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0429 14:07:56.002674 1903322 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-457090 service yakd-dashboard -n yakd-dashboard
	
	I0429 14:07:56.003031 1903322 retry.go:31] will retry after 212.063025ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
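Both failures above are an ordering race rather than a broken manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but it is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server has not yet registered the new kind, hence "no matches for kind ... ensure CRDs are installed first". minikube handles this by retrying; the apply --force run below completes about three seconds later, once the CRDs are established. Done by hand, the same idea would look like this (a sketch assuming the manifest paths from the log):

	# Create the snapshot CRDs first, wait for them to be Established,
	# then apply the VolumeSnapshotClass that depends on them
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml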
	I0429 14:07:56.007199 1903322 out.go:177] * Verifying registry addon...
	I0429 14:07:56.010788 1903322 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0429 14:07:56.028788 1903322 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0429 14:07:56.028865 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:07:56.036513 1903322 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0429 14:07:56.036543 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
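The long runs of "waiting for pod ... Pending" entries below are kapi.go polling each label selector until a matching pod reports Ready. A standalone equivalent of one of these waits, as a hedged sketch (illustrative, not minikube's code path):

	# Roughly what the registry wait amounts to
	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry --timeout=6m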
	I0429 14:07:56.217915 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 14:07:56.561541 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:07:56.570893 1903322 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-457090" context rescaled to 1 replicas
	I0429 14:07:56.584979 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:07:56.634986 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.662430579s)
	I0429 14:07:56.635022 1903322 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-457090"
	I0429 14:07:56.637647 1903322 out.go:177] * Verifying csi-hostpath-driver addon...
	I0429 14:07:56.640760 1903322 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0429 14:07:56.701452 1903322 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0429 14:07:56.701483 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:07:57.043821 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:07:57.057102 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:07:57.146112 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:07:57.510995 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:07:57.526434 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:07:57.645629 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:07:58.008038 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:07:58.009705 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:07:58.017299 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:07:58.148147 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:07:58.512827 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:07:58.520136 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:07:58.645583 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:07:59.008276 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:07:59.022101 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:07:59.150329 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:07:59.366143 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.148138012s)
	I0429 14:07:59.523490 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:07:59.525059 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:07:59.651360 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:07:59.798381 1903322 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0429 14:07:59.798464 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:59.816011 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:59.949419 1903322 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0429 14:07:59.973268 1903322 addons.go:234] Setting addon gcp-auth=true in "addons-457090"
	I0429 14:07:59.973325 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:59.973810 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:08:00.011262 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:00.011829 1903322 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0429 14:08:00.011883 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:08:00.027129 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:00.061153 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:00.072494 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:08:00.154502 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:00.241090 1903322 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0429 14:08:00.242979 1903322 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0429 14:08:00.245083 1903322 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0429 14:08:00.245119 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0429 14:08:00.278775 1903322 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0429 14:08:00.278812 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0429 14:08:00.322424 1903322 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 14:08:00.322459 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0429 14:08:00.371411 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 14:08:00.515857 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:00.535594 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:00.648551 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:01.031670 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:01.032384 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:01.130715 1903322 addons.go:470] Verifying addon gcp-auth=true in "addons-457090"
	I0429 14:08:01.132557 1903322 out.go:177] * Verifying gcp-auth addon...
	I0429 14:08:01.135413 1903322 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0429 14:08:01.138889 1903322 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0429 14:08:01.138958 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:01.146750 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:01.508559 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:01.515198 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:01.640962 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:01.646725 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:02.014104 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:02.014936 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:02.015821 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:02.143196 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:02.146460 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:02.509237 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:02.515537 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:02.639560 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:02.645004 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:03.008622 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:03.015558 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:03.139620 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:03.145802 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:03.506816 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:03.515751 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:03.638837 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:03.645428 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:04.008518 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:04.015659 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:04.139342 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:04.145258 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:04.507480 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:04.509357 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:04.515205 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:04.639700 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:04.644503 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:05.012192 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:05.015813 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:05.138788 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:05.145520 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:05.507237 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:05.515609 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:05.638611 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:05.645060 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:06.014050 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:06.016144 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:06.139306 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:06.145407 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:06.507694 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:06.515899 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:06.639255 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:06.645493 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:07.004576 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:07.007626 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:07.014335 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:07.138795 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:07.145559 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:07.507403 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:07.515271 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:07.640123 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:07.650958 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:08.007893 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:08.014798 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:08.138752 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:08.144537 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:08.507862 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:08.514281 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:08.639249 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:08.644804 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:09.004787 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:09.007580 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:09.015695 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:09.139923 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:09.145811 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:09.507980 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:09.514570 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:09.639988 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:09.645813 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:10.018178 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:10.019237 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:10.139448 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:10.146165 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:10.506698 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:10.515870 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:10.639113 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:10.644405 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:11.005357 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:11.008494 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:11.015166 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:11.139191 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:11.145140 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:11.507443 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:11.515233 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:11.639341 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:11.645089 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:12.009004 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:12.015369 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:12.138806 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:12.144633 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:12.507100 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:12.514840 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:12.638754 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:12.645112 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:13.007413 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:13.015326 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:13.139226 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:13.145274 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:13.505418 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:13.507384 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:13.515108 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:13.639279 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:13.645196 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:14.007725 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:14.014710 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:14.138648 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:14.145012 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:14.507108 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:14.514699 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:14.638913 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:14.645356 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:15.010342 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:15.015482 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:15.139760 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:15.144821 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:15.507317 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:15.515017 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:15.639171 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:15.645879 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:16.008032 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:16.009412 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:16.015087 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:16.139478 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:16.145315 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:16.506567 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:16.515398 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:16.639288 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:16.644554 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:17.007209 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:17.015014 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:17.139314 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:17.145150 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:17.507411 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:17.515170 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:17.639160 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:17.645708 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:18.007797 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:18.014691 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:18.139583 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:18.145594 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:18.509943 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:18.510734 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:18.514517 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:18.638876 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:18.645258 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:19.008023 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:19.014984 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:19.138888 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:19.144828 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:19.507617 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:19.514630 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:19.639473 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:19.645131 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:20.008532 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:20.015915 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:20.139344 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:20.145052 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:20.506958 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:20.514631 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:20.638851 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:20.644533 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:21.005060 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:21.007906 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:21.015149 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:21.138816 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:21.145375 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:21.506952 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:21.515123 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:21.641040 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:21.644565 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:22.007730 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:22.014576 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:22.139605 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:22.144783 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:22.510010 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:22.519253 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:22.640131 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:22.658432 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:23.007802 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:23.014473 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:23.138889 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:23.146064 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:23.506395 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:23.508040 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:23.515822 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:23.639220 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:23.644886 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:24.013314 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:24.016747 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:24.139568 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:24.145203 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:24.507970 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:24.514718 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:24.638394 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:24.644542 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:25.007444 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:25.015770 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:25.139390 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:25.144876 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:25.507835 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:25.514486 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:25.638835 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:25.644465 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:26.044461 1903322 node_ready.go:49] node "addons-457090" has status "Ready":"True"
	I0429 14:08:26.044497 1903322 node_ready.go:38] duration metric: took 30.043019374s for node "addons-457090" to be "Ready" ...
	I0429 14:08:26.044507 1903322 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 14:08:26.067267 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:26.069808 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:26.080150 1903322 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8c59t" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:26.140707 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:26.148265 1903322 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0429 14:08:26.148288 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:26.508028 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:26.515823 1903322 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0429 14:08:26.515850 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:26.640555 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:26.647268 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:27.039801 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:27.046087 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:27.140638 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:27.147048 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:27.512998 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:27.517960 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:27.639472 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:27.646819 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:28.013410 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:28.019266 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:28.093190 1903322 pod_ready.go:102] pod "coredns-7db6d8ff4d-8c59t" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:28.140117 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:28.148431 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:28.507119 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:28.515433 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:28.638889 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:28.647310 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:29.008536 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:29.026234 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:29.088360 1903322 pod_ready.go:92] pod "coredns-7db6d8ff4d-8c59t" in "kube-system" namespace has status "Ready":"True"
	I0429 14:08:29.088429 1903322 pod_ready.go:81] duration metric: took 3.008244056s for pod "coredns-7db6d8ff4d-8c59t" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.088466 1903322 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-457090" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.103390 1903322 pod_ready.go:92] pod "etcd-addons-457090" in "kube-system" namespace has status "Ready":"True"
	I0429 14:08:29.103463 1903322 pod_ready.go:81] duration metric: took 14.975458ms for pod "etcd-addons-457090" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.103492 1903322 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-457090" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.123660 1903322 pod_ready.go:92] pod "kube-apiserver-addons-457090" in "kube-system" namespace has status "Ready":"True"
	I0429 14:08:29.123743 1903322 pod_ready.go:81] duration metric: took 20.229695ms for pod "kube-apiserver-addons-457090" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.123772 1903322 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-457090" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.142397 1903322 pod_ready.go:92] pod "kube-controller-manager-addons-457090" in "kube-system" namespace has status "Ready":"True"
	I0429 14:08:29.142426 1903322 pod_ready.go:81] duration metric: took 18.62072ms for pod "kube-controller-manager-addons-457090" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.142439 1903322 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6wf6b" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.152851 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:29.161162 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:29.172163 1903322 pod_ready.go:92] pod "kube-proxy-6wf6b" in "kube-system" namespace has status "Ready":"True"
	I0429 14:08:29.172186 1903322 pod_ready.go:81] duration metric: took 29.739135ms for pod "kube-proxy-6wf6b" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.172199 1903322 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-457090" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.483970 1903322 pod_ready.go:92] pod "kube-scheduler-addons-457090" in "kube-system" namespace has status "Ready":"True"
	I0429 14:08:29.484041 1903322 pod_ready.go:81] duration metric: took 311.833609ms for pod "kube-scheduler-addons-457090" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.484067 1903322 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.507572 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:29.515762 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:29.639709 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:29.647914 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:30.030657 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:30.031622 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:30.139210 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:30.146867 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:30.508269 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:30.515436 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:30.638957 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:30.646445 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:31.008947 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:31.027309 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:31.140632 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:31.151130 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:31.491722 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:31.509796 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:31.517535 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:31.639600 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:31.651727 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:32.008743 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:32.016128 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:32.139917 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:32.147717 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:32.508310 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:32.515351 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:32.638741 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:32.645649 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:33.010766 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:33.018472 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:33.139588 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:33.147014 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:33.507288 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:33.517131 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:33.638902 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:33.647073 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:33.994894 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:34.008296 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:34.016905 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:34.140222 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:34.146657 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:34.523155 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:34.530753 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:34.639183 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:34.646152 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:35.010738 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:35.023576 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:35.139261 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:35.147117 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:35.507780 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:35.515142 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:35.639070 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:35.647289 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:36.008203 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:36.016100 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:36.143342 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:36.148145 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:36.492145 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:36.509774 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:36.519276 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:36.639238 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:36.658882 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:37.012813 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:37.017175 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:37.140889 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:37.151210 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:37.507863 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:37.516228 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:37.640061 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:37.654334 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:38.008084 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:38.015762 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:38.139682 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:38.148821 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:38.507665 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:38.517222 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:38.639793 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:38.646473 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:38.990320 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:39.007877 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:39.015737 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:39.139279 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:39.146250 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:39.514899 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:39.520734 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:39.640149 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:39.647831 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:40.010518 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:40.025994 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:40.140052 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:40.148553 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:40.521677 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:40.532967 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:40.639968 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:40.647031 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:41.008845 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:41.015996 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:41.139759 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:41.147657 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:41.494119 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:41.512731 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:41.520265 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:41.639045 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:41.647144 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:42.008345 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:42.017627 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:42.140522 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:42.148994 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:42.509151 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:42.517060 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:42.650783 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:42.659765 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:43.012076 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:43.052371 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:43.139112 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:43.150490 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:43.517010 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:43.523087 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:43.640000 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:43.648180 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:43.989914 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:44.007867 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:44.016471 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:44.139455 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:44.146397 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:44.507239 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:44.516127 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:44.639360 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:44.645790 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:45.008573 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:45.016744 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:45.155175 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:45.175149 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:45.511595 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:45.516184 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:45.640402 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:45.651739 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:45.991087 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:46.007594 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:46.016198 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:46.140009 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:46.146526 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:46.517392 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:46.518243 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:46.641554 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:46.648488 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:47.008318 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:47.016413 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:47.140654 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:47.148267 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:47.507696 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:47.516707 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:47.640970 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:47.649665 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:47.999897 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:48.013694 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:48.042562 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:48.140071 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:48.156046 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:48.517233 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:48.523942 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:48.643774 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:48.649773 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:49.013249 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:49.043986 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:49.139589 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:49.146587 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:49.518656 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:49.524650 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:49.639111 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:49.648034 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:50.017849 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:50.020597 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:50.140216 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:50.148972 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:50.490895 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:50.508487 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:50.525061 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:50.639759 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:50.650033 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:51.020179 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:51.029921 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:51.140571 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:51.149090 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:51.552119 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:51.560720 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:51.640484 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:51.655071 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:52.008141 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:52.016284 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:52.139669 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:52.147674 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:52.507166 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:52.511472 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:52.517233 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:52.640009 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:52.647647 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:53.008089 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:53.017412 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:53.139321 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:53.146579 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:53.509685 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:53.518295 1903322 kapi.go:107] duration metric: took 57.507506632s to wait for kubernetes.io/minikube-addons=registry ...
	I0429 14:08:53.640421 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:53.652140 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:54.008999 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:54.141025 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:54.151587 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:54.507563 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:54.639154 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:54.646347 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:54.990348 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:55.008793 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:55.139380 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:55.146745 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:55.508177 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:55.644297 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:55.648322 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:56.008573 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:56.139684 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:56.151071 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:56.522894 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:56.639387 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:56.659229 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:56.991108 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:57.008803 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:57.140625 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:57.149556 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:57.510060 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:57.641199 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:57.663306 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:58.008986 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:58.139748 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:58.148260 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:58.512494 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:58.638893 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:58.646650 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:59.007213 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:59.139579 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:59.145929 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:59.490378 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:59.507586 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:59.638895 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:59.646516 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:00.015312 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:00.150281 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:00.164775 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:00.520597 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:00.639656 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:00.648651 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:01.007795 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:01.139558 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:01.147863 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:01.491990 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:01.507068 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:01.639467 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:01.647552 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:02.009413 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:02.138757 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:02.151143 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:02.516098 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:02.640087 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:02.646177 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:03.008503 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:03.139339 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:03.147774 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:03.500345 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:03.513139 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:03.643637 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:03.656545 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:04.009055 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:04.139773 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:04.150052 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:04.507326 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:04.638597 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:04.646794 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:05.007683 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:05.139247 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:05.147652 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:05.511953 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:05.639303 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:05.647798 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:05.991312 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:06.008445 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:06.139758 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:06.152922 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:06.521280 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:06.645371 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:06.651703 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:07.008121 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:07.139660 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:07.146795 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:07.524140 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:07.640350 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:07.647536 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:08.011325 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:08.143024 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:08.151350 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:08.491790 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:08.515336 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:08.639314 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:08.647397 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:09.008058 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:09.140088 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:09.150748 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:09.520409 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:09.639539 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:09.648719 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:10.020093 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:10.140192 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:10.151764 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:10.507742 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:10.639527 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:10.646669 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:10.990747 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:11.008338 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:11.138764 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:11.146495 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:11.507914 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:11.639437 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:11.646259 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:12.008372 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:12.139327 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:12.146674 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:12.510741 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:12.639244 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:12.649794 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:13.008901 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:13.143842 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:13.153012 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:13.490791 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:13.508000 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:13.639277 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:13.646854 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:14.008179 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:14.140199 1903322 kapi.go:107] duration metric: took 1m13.004776545s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0429 14:09:14.143250 1903322 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-457090 cluster.
	I0429 14:09:14.145570 1903322 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0429 14:09:14.147631 1903322 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
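	(For reference, a minimal hypothetical pod manifest showing the `gcp-auth-skip-secret` label that the addon output above refers to; the pod name, image, and the "true" label value are illustrative assumptions, not taken from this test run.)

	apiVersion: v1
	kind: Pod
	metadata:
	  name: example-no-gcp-creds        # placeholder name
	  labels:
	    gcp-auth-skip-secret: "true"    # assumed value; label key is the one named in the addon message
	spec:
	  containers:
	  - name: app
	    image: nginx                    # placeholder image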
	I0429 14:09:14.149831 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:14.513227 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:14.646964 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:15.008855 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:15.147306 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:15.492302 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:15.507570 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:15.647566 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:16.007916 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:16.146918 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:16.518043 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:16.657937 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:17.008506 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:17.148455 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:17.516289 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:17.647042 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:17.991233 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:18.008533 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:18.148065 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:18.508543 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:18.647751 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:19.007905 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:19.153322 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:19.510409 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:19.653298 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:19.997800 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:20.017589 1903322 kapi.go:107] duration metric: took 1m24.017190607s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0429 14:09:20.151989 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:20.647141 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:21.150236 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:21.646912 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:21.998511 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:22.146778 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:22.647046 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:23.146860 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:23.647269 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:24.021366 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:24.148960 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:24.645830 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:25.146843 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:25.648512 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:26.147564 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:26.492339 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:26.661342 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:27.146844 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:27.646664 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:28.148459 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:28.494577 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:28.647414 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:29.146552 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:29.646341 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:30.147073 1903322 kapi.go:107] duration metric: took 1m33.506311851s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0429 14:09:30.150713 1903322 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0429 14:09:30.152876 1903322 addons.go:505] duration metric: took 1m40.241163798s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner storage-provisioner-rancher inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0429 14:09:30.990475 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:33.489517 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:35.490130 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:37.490807 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:39.491004 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:41.491681 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:43.990261 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:46.491983 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:48.991487 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:51.490057 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:53.490838 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:54.490716 1903322 pod_ready.go:92] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"True"
	I0429 14:09:54.490746 1903322 pod_ready.go:81] duration metric: took 1m25.006658025s for pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace to be "Ready" ...
	I0429 14:09:54.490758 1903322 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-b6fbn" in "kube-system" namespace to be "Ready" ...
	I0429 14:09:54.501917 1903322 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-b6fbn" in "kube-system" namespace has status "Ready":"True"
	I0429 14:09:54.501943 1903322 pod_ready.go:81] duration metric: took 11.177606ms for pod "nvidia-device-plugin-daemonset-b6fbn" in "kube-system" namespace to be "Ready" ...
	I0429 14:09:54.501964 1903322 pod_ready.go:38] duration metric: took 1m28.457444913s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 14:09:54.501980 1903322 api_server.go:52] waiting for apiserver process to appear ...
	I0429 14:09:54.502013 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 14:09:54.502078 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 14:09:54.553780 1903322 cri.go:89] found id: "8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93"
	I0429 14:09:54.553804 1903322 cri.go:89] found id: ""
	I0429 14:09:54.553812 1903322 logs.go:276] 1 containers: [8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93]
	I0429 14:09:54.553886 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:09:54.558061 1903322 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 14:09:54.558175 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 14:09:54.602110 1903322 cri.go:89] found id: "3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c"
	I0429 14:09:54.602130 1903322 cri.go:89] found id: ""
	I0429 14:09:54.602138 1903322 logs.go:276] 1 containers: [3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c]
	I0429 14:09:54.602211 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:09:54.605641 1903322 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 14:09:54.605735 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 14:09:54.649461 1903322 cri.go:89] found id: "a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5"
	I0429 14:09:54.649485 1903322 cri.go:89] found id: ""
	I0429 14:09:54.649493 1903322 logs.go:276] 1 containers: [a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5]
	I0429 14:09:54.649546 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:09:54.653026 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 14:09:54.653123 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 14:09:54.693830 1903322 cri.go:89] found id: "dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93"
	I0429 14:09:54.693853 1903322 cri.go:89] found id: ""
	I0429 14:09:54.693861 1903322 logs.go:276] 1 containers: [dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93]
	I0429 14:09:54.693936 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:09:54.698361 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 14:09:54.698433 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 14:09:54.737632 1903322 cri.go:89] found id: "99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67"
	I0429 14:09:54.737653 1903322 cri.go:89] found id: ""
	I0429 14:09:54.737660 1903322 logs.go:276] 1 containers: [99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67]
	I0429 14:09:54.737725 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:09:54.741122 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 14:09:54.741188 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 14:09:54.777864 1903322 cri.go:89] found id: "99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034"
	I0429 14:09:54.777894 1903322 cri.go:89] found id: ""
	I0429 14:09:54.777902 1903322 logs.go:276] 1 containers: [99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034]
	I0429 14:09:54.777957 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:09:54.781446 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 14:09:54.781510 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 14:09:54.818099 1903322 cri.go:89] found id: "0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc"
	I0429 14:09:54.818121 1903322 cri.go:89] found id: ""
	I0429 14:09:54.818130 1903322 logs.go:276] 1 containers: [0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc]
	I0429 14:09:54.818184 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:09:54.821891 1903322 logs.go:123] Gathering logs for kindnet [0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc] ...
	I0429 14:09:54.821927 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc"
	I0429 14:09:54.865904 1903322 logs.go:123] Gathering logs for CRI-O ...
	I0429 14:09:54.865930 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 14:09:54.957217 1903322 logs.go:123] Gathering logs for kubelet ...
	I0429 14:09:54.957254 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 14:09:55.016566 1903322 logs.go:138] Found kubelet problem: Apr 29 14:08:26 addons-457090 kubelet[1499]: W0429 14:08:26.020696    1499 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	W0429 14:09:55.016790 1903322 logs.go:138] Found kubelet problem: Apr 29 14:08:26 addons-457090 kubelet[1499]: E0429 14:08:26.020738    1499 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	I0429 14:09:55.050896 1903322 logs.go:123] Gathering logs for dmesg ...
	I0429 14:09:55.050931 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 14:09:55.073566 1903322 logs.go:123] Gathering logs for etcd [3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c] ...
	I0429 14:09:55.073599 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c"
	I0429 14:09:55.127319 1903322 logs.go:123] Gathering logs for coredns [a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5] ...
	I0429 14:09:55.127352 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5"
	I0429 14:09:55.168253 1903322 logs.go:123] Gathering logs for kube-scheduler [dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93] ...
	I0429 14:09:55.168290 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93"
	I0429 14:09:55.206761 1903322 logs.go:123] Gathering logs for kube-controller-manager [99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034] ...
	I0429 14:09:55.206789 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034"
	I0429 14:09:55.292396 1903322 logs.go:123] Gathering logs for container status ...
	I0429 14:09:55.292437 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 14:09:55.341726 1903322 logs.go:123] Gathering logs for describe nodes ...
	I0429 14:09:55.341758 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 14:09:55.516443 1903322 logs.go:123] Gathering logs for kube-apiserver [8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93] ...
	I0429 14:09:55.516476 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93"
	I0429 14:09:55.570609 1903322 logs.go:123] Gathering logs for kube-proxy [99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67] ...
	I0429 14:09:55.570647 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67"
	I0429 14:09:55.608397 1903322 out.go:304] Setting ErrFile to fd 2...
	I0429 14:09:55.608420 1903322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 14:09:55.608481 1903322 out.go:239] X Problems detected in kubelet:
	W0429 14:09:55.608492 1903322 out.go:239]   Apr 29 14:08:26 addons-457090 kubelet[1499]: W0429 14:08:26.020696    1499 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	W0429 14:09:55.608505 1903322 out.go:239]   Apr 29 14:08:26 addons-457090 kubelet[1499]: E0429 14:08:26.020738    1499 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	I0429 14:09:55.608513 1903322 out.go:304] Setting ErrFile to fd 2...
	I0429 14:09:55.608523 1903322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:10:05.610023 1903322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 14:10:05.623923 1903322 api_server.go:72] duration metric: took 2m15.712477396s to wait for apiserver process to appear ...
	I0429 14:10:05.623947 1903322 api_server.go:88] waiting for apiserver healthz status ...
	I0429 14:10:05.623982 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 14:10:05.624041 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 14:10:05.660696 1903322 cri.go:89] found id: "8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93"
	I0429 14:10:05.660716 1903322 cri.go:89] found id: ""
	I0429 14:10:05.660724 1903322 logs.go:276] 1 containers: [8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93]
	I0429 14:10:05.660789 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:05.664360 1903322 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 14:10:05.664424 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 14:10:05.704736 1903322 cri.go:89] found id: "3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c"
	I0429 14:10:05.704759 1903322 cri.go:89] found id: ""
	I0429 14:10:05.704768 1903322 logs.go:276] 1 containers: [3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c]
	I0429 14:10:05.704825 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:05.708530 1903322 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 14:10:05.708603 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 14:10:05.747689 1903322 cri.go:89] found id: "a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5"
	I0429 14:10:05.747708 1903322 cri.go:89] found id: ""
	I0429 14:10:05.747717 1903322 logs.go:276] 1 containers: [a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5]
	I0429 14:10:05.747784 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:05.751408 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 14:10:05.751476 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 14:10:05.792586 1903322 cri.go:89] found id: "dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93"
	I0429 14:10:05.792608 1903322 cri.go:89] found id: ""
	I0429 14:10:05.792615 1903322 logs.go:276] 1 containers: [dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93]
	I0429 14:10:05.792682 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:05.796183 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 14:10:05.796259 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 14:10:05.838053 1903322 cri.go:89] found id: "99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67"
	I0429 14:10:05.838074 1903322 cri.go:89] found id: ""
	I0429 14:10:05.838082 1903322 logs.go:276] 1 containers: [99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67]
	I0429 14:10:05.838138 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:05.841960 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 14:10:05.842031 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 14:10:05.883585 1903322 cri.go:89] found id: "99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034"
	I0429 14:10:05.883606 1903322 cri.go:89] found id: ""
	I0429 14:10:05.883614 1903322 logs.go:276] 1 containers: [99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034]
	I0429 14:10:05.883671 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:05.887338 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 14:10:05.887438 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 14:10:05.927908 1903322 cri.go:89] found id: "0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc"
	I0429 14:10:05.927931 1903322 cri.go:89] found id: ""
	I0429 14:10:05.927939 1903322 logs.go:276] 1 containers: [0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc]
	I0429 14:10:05.928029 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:05.931551 1903322 logs.go:123] Gathering logs for describe nodes ...
	I0429 14:10:05.931576 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 14:10:06.086357 1903322 logs.go:123] Gathering logs for etcd [3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c] ...
	I0429 14:10:06.086394 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c"
	I0429 14:10:06.136506 1903322 logs.go:123] Gathering logs for coredns [a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5] ...
	I0429 14:10:06.136540 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5"
	I0429 14:10:06.178980 1903322 logs.go:123] Gathering logs for kube-scheduler [dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93] ...
	I0429 14:10:06.179009 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93"
	I0429 14:10:06.221801 1903322 logs.go:123] Gathering logs for kube-proxy [99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67] ...
	I0429 14:10:06.221831 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67"
	I0429 14:10:06.269496 1903322 logs.go:123] Gathering logs for kube-controller-manager [99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034] ...
	I0429 14:10:06.269525 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034"
	I0429 14:10:06.341263 1903322 logs.go:123] Gathering logs for CRI-O ...
	I0429 14:10:06.341306 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 14:10:06.438392 1903322 logs.go:123] Gathering logs for kubelet ...
	I0429 14:10:06.438433 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 14:10:06.491821 1903322 logs.go:138] Found kubelet problem: Apr 29 14:08:26 addons-457090 kubelet[1499]: W0429 14:08:26.020696    1499 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	W0429 14:10:06.492033 1903322 logs.go:138] Found kubelet problem: Apr 29 14:08:26 addons-457090 kubelet[1499]: E0429 14:08:26.020738    1499 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	I0429 14:10:06.526818 1903322 logs.go:123] Gathering logs for container status ...
	I0429 14:10:06.526847 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 14:10:06.583368 1903322 logs.go:123] Gathering logs for kube-apiserver [8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93] ...
	I0429 14:10:06.583399 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93"
	I0429 14:10:06.639776 1903322 logs.go:123] Gathering logs for kindnet [0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc] ...
	I0429 14:10:06.639815 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc"
	I0429 14:10:06.687108 1903322 logs.go:123] Gathering logs for dmesg ...
	I0429 14:10:06.687137 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 14:10:06.707252 1903322 out.go:304] Setting ErrFile to fd 2...
	I0429 14:10:06.707289 1903322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 14:10:06.707463 1903322 out.go:239] X Problems detected in kubelet:
	W0429 14:10:06.707483 1903322 out.go:239]   Apr 29 14:08:26 addons-457090 kubelet[1499]: W0429 14:08:26.020696    1499 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	W0429 14:10:06.707518 1903322 out.go:239]   Apr 29 14:08:26 addons-457090 kubelet[1499]: E0429 14:08:26.020738    1499 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	I0429 14:10:06.707534 1903322 out.go:304] Setting ErrFile to fd 2...
	I0429 14:10:06.707541 1903322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:10:16.708970 1903322 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:10:16.716560 1903322 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0429 14:10:16.717569 1903322 api_server.go:141] control plane version: v1.30.0
	I0429 14:10:16.717603 1903322 api_server.go:131] duration metric: took 11.093647819s to wait for apiserver health ...
	I0429 14:10:16.717612 1903322 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 14:10:16.717634 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 14:10:16.717695 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 14:10:16.756049 1903322 cri.go:89] found id: "8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93"
	I0429 14:10:16.756073 1903322 cri.go:89] found id: ""
	I0429 14:10:16.756082 1903322 logs.go:276] 1 containers: [8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93]
	I0429 14:10:16.756140 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:16.759590 1903322 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 14:10:16.759664 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 14:10:16.797693 1903322 cri.go:89] found id: "3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c"
	I0429 14:10:16.797713 1903322 cri.go:89] found id: ""
	I0429 14:10:16.797721 1903322 logs.go:276] 1 containers: [3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c]
	I0429 14:10:16.797777 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:16.801270 1903322 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 14:10:16.801353 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 14:10:16.838206 1903322 cri.go:89] found id: "a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5"
	I0429 14:10:16.838232 1903322 cri.go:89] found id: ""
	I0429 14:10:16.838240 1903322 logs.go:276] 1 containers: [a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5]
	I0429 14:10:16.838297 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:16.841894 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 14:10:16.841963 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 14:10:16.880739 1903322 cri.go:89] found id: "dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93"
	I0429 14:10:16.880760 1903322 cri.go:89] found id: ""
	I0429 14:10:16.880768 1903322 logs.go:276] 1 containers: [dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93]
	I0429 14:10:16.880832 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:16.884327 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 14:10:16.884391 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 14:10:16.923260 1903322 cri.go:89] found id: "99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67"
	I0429 14:10:16.923349 1903322 cri.go:89] found id: ""
	I0429 14:10:16.923384 1903322 logs.go:276] 1 containers: [99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67]
	I0429 14:10:16.923478 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:16.927178 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 14:10:16.927252 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 14:10:16.965465 1903322 cri.go:89] found id: "99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034"
	I0429 14:10:16.965485 1903322 cri.go:89] found id: ""
	I0429 14:10:16.965493 1903322 logs.go:276] 1 containers: [99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034]
	I0429 14:10:16.965547 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:16.969241 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 14:10:16.969311 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 14:10:17.018546 1903322 cri.go:89] found id: "0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc"
	I0429 14:10:17.018568 1903322 cri.go:89] found id: ""
	I0429 14:10:17.018576 1903322 logs.go:276] 1 containers: [0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc]
	I0429 14:10:17.018633 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:17.026816 1903322 logs.go:123] Gathering logs for dmesg ...
	I0429 14:10:17.026840 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 14:10:17.045853 1903322 logs.go:123] Gathering logs for etcd [3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c] ...
	I0429 14:10:17.045883 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c"
	I0429 14:10:17.096278 1903322 logs.go:123] Gathering logs for kube-proxy [99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67] ...
	I0429 14:10:17.096310 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67"
	I0429 14:10:17.136366 1903322 logs.go:123] Gathering logs for kindnet [0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc] ...
	I0429 14:10:17.136397 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc"
	I0429 14:10:17.182834 1903322 logs.go:123] Gathering logs for kubelet ...
	I0429 14:10:17.182862 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 14:10:17.207800 1903322 logs.go:138] Found kubelet problem: Apr 29 14:08:26 addons-457090 kubelet[1499]: W0429 14:08:26.020696    1499 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	W0429 14:10:17.208005 1903322 logs.go:138] Found kubelet problem: Apr 29 14:08:26 addons-457090 kubelet[1499]: E0429 14:08:26.020738    1499 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	I0429 14:10:17.262314 1903322 logs.go:123] Gathering logs for kube-apiserver [8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93] ...
	I0429 14:10:17.262350 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93"
	I0429 14:10:17.334510 1903322 logs.go:123] Gathering logs for coredns [a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5] ...
	I0429 14:10:17.334548 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5"
	I0429 14:10:17.375266 1903322 logs.go:123] Gathering logs for kube-scheduler [dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93] ...
	I0429 14:10:17.375296 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93"
	I0429 14:10:17.412134 1903322 logs.go:123] Gathering logs for kube-controller-manager [99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034] ...
	I0429 14:10:17.412162 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034"
	I0429 14:10:17.480212 1903322 logs.go:123] Gathering logs for CRI-O ...
	I0429 14:10:17.480249 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 14:10:17.579098 1903322 logs.go:123] Gathering logs for container status ...
	I0429 14:10:17.579137 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 14:10:17.630146 1903322 logs.go:123] Gathering logs for describe nodes ...
	I0429 14:10:17.630179 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 14:10:17.765303 1903322 out.go:304] Setting ErrFile to fd 2...
	I0429 14:10:17.765329 1903322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 14:10:17.765382 1903322 out.go:239] X Problems detected in kubelet:
	W0429 14:10:17.765391 1903322 out.go:239]   Apr 29 14:08:26 addons-457090 kubelet[1499]: W0429 14:08:26.020696    1499 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	W0429 14:10:17.765399 1903322 out.go:239]   Apr 29 14:08:26 addons-457090 kubelet[1499]: E0429 14:08:26.020738    1499 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	I0429 14:10:17.765412 1903322 out.go:304] Setting ErrFile to fd 2...
	I0429 14:10:17.765419 1903322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:10:27.776417 1903322 system_pods.go:59] 18 kube-system pods found
	I0429 14:10:27.776453 1903322 system_pods.go:61] "coredns-7db6d8ff4d-8c59t" [6db81098-176e-4ea8-b78f-36bbcc52095f] Running
	I0429 14:10:27.776460 1903322 system_pods.go:61] "csi-hostpath-attacher-0" [62cdde81-fe62-4de1-817e-071809366cc1] Running
	I0429 14:10:27.776464 1903322 system_pods.go:61] "csi-hostpath-resizer-0" [107e3b64-e1dd-4011-b5b1-dfccb55c7ee4] Running
	I0429 14:10:27.776469 1903322 system_pods.go:61] "csi-hostpathplugin-pdrr9" [e6a7f56a-7b70-452a-980b-3db7b5e261c1] Running
	I0429 14:10:27.776473 1903322 system_pods.go:61] "etcd-addons-457090" [b193ac16-9a2e-4f2c-a710-09df74520cce] Running
	I0429 14:10:27.776477 1903322 system_pods.go:61] "kindnet-tvhsm" [4efdf177-7bfb-4e88-a045-4b64aad67f6a] Running
	I0429 14:10:27.776481 1903322 system_pods.go:61] "kube-apiserver-addons-457090" [5eb570ba-cbb2-4426-8cdd-7d80c357c572] Running
	I0429 14:10:27.776486 1903322 system_pods.go:61] "kube-controller-manager-addons-457090" [8a95a1b5-02f1-407d-8309-30984a5e118b] Running
	I0429 14:10:27.776526 1903322 system_pods.go:61] "kube-ingress-dns-minikube" [757aca19-0d56-4052-975e-6621832dc1b4] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0429 14:10:27.776545 1903322 system_pods.go:61] "kube-proxy-6wf6b" [d2a0a51b-b9e6-4e8c-b402-97a2fc9400ed] Running
	I0429 14:10:27.776567 1903322 system_pods.go:61] "kube-scheduler-addons-457090" [7949e7cb-686e-48b8-a52e-ce65e292de69] Running
	I0429 14:10:27.776586 1903322 system_pods.go:61] "metrics-server-c59844bb4-hltz2" [aedce136-b59d-41a1-83ba-037b4f9e9302] Running
	I0429 14:10:27.776614 1903322 system_pods.go:61] "nvidia-device-plugin-daemonset-b6fbn" [d72d7bb4-220a-44af-9b8f-8b406f53e814] Running
	I0429 14:10:27.776632 1903322 system_pods.go:61] "registry-proxy-96wq6" [1b8f503a-0540-4820-bd92-04b584ad56fb] Running
	I0429 14:10:27.776650 1903322 system_pods.go:61] "registry-zhb4n" [9abf552b-43fc-4cf4-968b-c3f3be943f93] Running
	I0429 14:10:27.776688 1903322 system_pods.go:61] "snapshot-controller-745499f584-q2bjz" [914ec43f-98bf-4718-9e42-59612fcf4a7b] Running
	I0429 14:10:27.776715 1903322 system_pods.go:61] "snapshot-controller-745499f584-qkwpt" [38e18f20-7e75-42ce-a983-3db45cab9efb] Running
	I0429 14:10:27.776734 1903322 system_pods.go:61] "storage-provisioner" [d4b9907a-2a43-4ebd-971b-85c4ac8c9969] Running
	I0429 14:10:27.776755 1903322 system_pods.go:74] duration metric: took 11.059136577s to wait for pod list to return data ...
	I0429 14:10:27.776776 1903322 default_sa.go:34] waiting for default service account to be created ...
	I0429 14:10:27.779153 1903322 default_sa.go:45] found service account: "default"
	I0429 14:10:27.779179 1903322 default_sa.go:55] duration metric: took 2.368305ms for default service account to be created ...
	I0429 14:10:27.779188 1903322 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 14:10:27.789203 1903322 system_pods.go:86] 18 kube-system pods found
	I0429 14:10:27.789238 1903322 system_pods.go:89] "coredns-7db6d8ff4d-8c59t" [6db81098-176e-4ea8-b78f-36bbcc52095f] Running
	I0429 14:10:27.789245 1903322 system_pods.go:89] "csi-hostpath-attacher-0" [62cdde81-fe62-4de1-817e-071809366cc1] Running
	I0429 14:10:27.789251 1903322 system_pods.go:89] "csi-hostpath-resizer-0" [107e3b64-e1dd-4011-b5b1-dfccb55c7ee4] Running
	I0429 14:10:27.789255 1903322 system_pods.go:89] "csi-hostpathplugin-pdrr9" [e6a7f56a-7b70-452a-980b-3db7b5e261c1] Running
	I0429 14:10:27.789259 1903322 system_pods.go:89] "etcd-addons-457090" [b193ac16-9a2e-4f2c-a710-09df74520cce] Running
	I0429 14:10:27.789265 1903322 system_pods.go:89] "kindnet-tvhsm" [4efdf177-7bfb-4e88-a045-4b64aad67f6a] Running
	I0429 14:10:27.789270 1903322 system_pods.go:89] "kube-apiserver-addons-457090" [5eb570ba-cbb2-4426-8cdd-7d80c357c572] Running
	I0429 14:10:27.789274 1903322 system_pods.go:89] "kube-controller-manager-addons-457090" [8a95a1b5-02f1-407d-8309-30984a5e118b] Running
	I0429 14:10:27.789318 1903322 system_pods.go:89] "kube-ingress-dns-minikube" [757aca19-0d56-4052-975e-6621832dc1b4] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0429 14:10:27.789330 1903322 system_pods.go:89] "kube-proxy-6wf6b" [d2a0a51b-b9e6-4e8c-b402-97a2fc9400ed] Running
	I0429 14:10:27.789336 1903322 system_pods.go:89] "kube-scheduler-addons-457090" [7949e7cb-686e-48b8-a52e-ce65e292de69] Running
	I0429 14:10:27.789344 1903322 system_pods.go:89] "metrics-server-c59844bb4-hltz2" [aedce136-b59d-41a1-83ba-037b4f9e9302] Running
	I0429 14:10:27.789359 1903322 system_pods.go:89] "nvidia-device-plugin-daemonset-b6fbn" [d72d7bb4-220a-44af-9b8f-8b406f53e814] Running
	I0429 14:10:27.789364 1903322 system_pods.go:89] "registry-proxy-96wq6" [1b8f503a-0540-4820-bd92-04b584ad56fb] Running
	I0429 14:10:27.789367 1903322 system_pods.go:89] "registry-zhb4n" [9abf552b-43fc-4cf4-968b-c3f3be943f93] Running
	I0429 14:10:27.789371 1903322 system_pods.go:89] "snapshot-controller-745499f584-q2bjz" [914ec43f-98bf-4718-9e42-59612fcf4a7b] Running
	I0429 14:10:27.789376 1903322 system_pods.go:89] "snapshot-controller-745499f584-qkwpt" [38e18f20-7e75-42ce-a983-3db45cab9efb] Running
	I0429 14:10:27.789478 1903322 system_pods.go:89] "storage-provisioner" [d4b9907a-2a43-4ebd-971b-85c4ac8c9969] Running
	I0429 14:10:27.789495 1903322 system_pods.go:126] duration metric: took 10.300369ms to wait for k8s-apps to be running ...
	I0429 14:10:27.789504 1903322 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 14:10:27.789579 1903322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 14:10:27.802063 1903322 system_svc.go:56] duration metric: took 12.54929ms WaitForService to wait for kubelet
	I0429 14:10:27.802107 1903322 kubeadm.go:576] duration metric: took 2m37.890665339s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 14:10:27.802127 1903322 node_conditions.go:102] verifying NodePressure condition ...
	I0429 14:10:27.805378 1903322 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0429 14:10:27.805412 1903322 node_conditions.go:123] node cpu capacity is 2
	I0429 14:10:27.805425 1903322 node_conditions.go:105] duration metric: took 3.291886ms to run NodePressure ...
	I0429 14:10:27.805438 1903322 start.go:240] waiting for startup goroutines ...
	I0429 14:10:27.805446 1903322 start.go:245] waiting for cluster config update ...
	I0429 14:10:27.805464 1903322 start.go:254] writing updated cluster config ...
	I0429 14:10:27.805764 1903322 ssh_runner.go:195] Run: rm -f paused
	I0429 14:10:28.135023 1903322 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 14:10:28.137397 1903322 out.go:177] * Done! kubectl is now configured to use "addons-457090" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 29 14:14:40 addons-457090 crio[925]: time="2024-04-29 14:14:40.224264052Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=ca266a3a-5305-484e-87c2-9a79c13b211e name=/runtime.v1.ImageService/ImageStatus
	Apr 29 14:14:40 addons-457090 crio[925]: time="2024-04-29 14:14:40.224464108Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=ca266a3a-5305-484e-87c2-9a79c13b211e name=/runtime.v1.ImageService/ImageStatus
	Apr 29 14:14:40 addons-457090 crio[925]: time="2024-04-29 14:14:40.225275352Z" level=info msg="Creating container: default/hello-world-app-86c47465fc-z6j5z/hello-world-app" id=a16cc9e4-c375-417f-a6f2-03ce321d7ec4 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 14:14:40 addons-457090 crio[925]: time="2024-04-29 14:14:40.225375651Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 29 14:14:40 addons-457090 crio[925]: time="2024-04-29 14:14:40.296346463Z" level=info msg="Created container 85fd28a66b11945e073794ebb93a0fe0def80e8759c51a6c47a79f9906f374dd: default/hello-world-app-86c47465fc-z6j5z/hello-world-app" id=a16cc9e4-c375-417f-a6f2-03ce321d7ec4 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 14:14:40 addons-457090 crio[925]: time="2024-04-29 14:14:40.297300082Z" level=info msg="Starting container: 85fd28a66b11945e073794ebb93a0fe0def80e8759c51a6c47a79f9906f374dd" id=a7a78696-423b-4758-9c53-8e6f0791e41a name=/runtime.v1.RuntimeService/StartContainer
	Apr 29 14:14:40 addons-457090 crio[925]: time="2024-04-29 14:14:40.305351094Z" level=info msg="Started container" PID=8464 containerID=85fd28a66b11945e073794ebb93a0fe0def80e8759c51a6c47a79f9906f374dd description=default/hello-world-app-86c47465fc-z6j5z/hello-world-app id=a7a78696-423b-4758-9c53-8e6f0791e41a name=/runtime.v1.RuntimeService/StartContainer sandboxID=d30b14e260c4432f2e3b2fa6c2f3d7f91b15831cd41866e89957da7b1387d077
	Apr 29 14:14:40 addons-457090 conmon[8453]: conmon 85fd28a66b11945e0737 <ninfo>: container 8464 exited with status 1
	Apr 29 14:14:40 addons-457090 crio[925]: time="2024-04-29 14:14:40.529306195Z" level=info msg="Stopping container: 2adeec085dd39a94220deb9409e6363374c2b212433c59ad1844cc49e085398d (timeout: 2s)" id=6003b5a6-742f-44b6-8be6-1e704f78eb39 name=/runtime.v1.RuntimeService/StopContainer
	Apr 29 14:14:40 addons-457090 crio[925]: time="2024-04-29 14:14:40.800083191Z" level=info msg="Removing container: 65997e489d6a5d8d60be88daf946cb4465a2f4e736c75581d738540460b3e393" id=395601e2-d2b4-405e-91c6-671bf8fb55bf name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 29 14:14:40 addons-457090 crio[925]: time="2024-04-29 14:14:40.824462440Z" level=info msg="Removed container 65997e489d6a5d8d60be88daf946cb4465a2f4e736c75581d738540460b3e393: default/hello-world-app-86c47465fc-z6j5z/hello-world-app" id=395601e2-d2b4-405e-91c6-671bf8fb55bf name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 29 14:14:42 addons-457090 crio[925]: time="2024-04-29 14:14:42.536467183Z" level=warning msg="Stopping container 2adeec085dd39a94220deb9409e6363374c2b212433c59ad1844cc49e085398d with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=6003b5a6-742f-44b6-8be6-1e704f78eb39 name=/runtime.v1.RuntimeService/StopContainer
	Apr 29 14:14:42 addons-457090 conmon[4694]: conmon 2adeec085dd39a94220d <ninfo>: container 4705 exited with status 137
	Apr 29 14:14:42 addons-457090 crio[925]: time="2024-04-29 14:14:42.673596618Z" level=info msg="Stopped container 2adeec085dd39a94220deb9409e6363374c2b212433c59ad1844cc49e085398d: ingress-nginx/ingress-nginx-controller-84df5799c-qkqpq/controller" id=6003b5a6-742f-44b6-8be6-1e704f78eb39 name=/runtime.v1.RuntimeService/StopContainer
	Apr 29 14:14:42 addons-457090 crio[925]: time="2024-04-29 14:14:42.674142641Z" level=info msg="Stopping pod sandbox: 82e18be3465768e8ae0f2b00dcdfc6940813ee2f02fc7e41eae4ce5a7b1e7571" id=f1476e64-1534-4cc2-82a2-85460a334994 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 29 14:14:42 addons-457090 crio[925]: time="2024-04-29 14:14:42.677207615Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-7WLLD7WEUT4JEF23 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-WBTZQ4SLXB5ZZXEW - [0:0]\n-X KUBE-HP-WBTZQ4SLXB5ZZXEW\n-X KUBE-HP-7WLLD7WEUT4JEF23\nCOMMIT\n"
	Apr 29 14:14:42 addons-457090 crio[925]: time="2024-04-29 14:14:42.678505175Z" level=info msg="Closing host port tcp:80"
	Apr 29 14:14:42 addons-457090 crio[925]: time="2024-04-29 14:14:42.678553962Z" level=info msg="Closing host port tcp:443"
	Apr 29 14:14:42 addons-457090 crio[925]: time="2024-04-29 14:14:42.679780080Z" level=info msg="Host port tcp:80 does not have an open socket"
	Apr 29 14:14:42 addons-457090 crio[925]: time="2024-04-29 14:14:42.679808633Z" level=info msg="Host port tcp:443 does not have an open socket"
	Apr 29 14:14:42 addons-457090 crio[925]: time="2024-04-29 14:14:42.679976165Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-84df5799c-qkqpq Namespace:ingress-nginx ID:82e18be3465768e8ae0f2b00dcdfc6940813ee2f02fc7e41eae4ce5a7b1e7571 UID:a543ff14-53a0-4c1b-9db4-b3f9eef88d6c NetNS:/var/run/netns/6bcdf88f-ca85-4340-a724-fd670c395c33 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 29 14:14:42 addons-457090 crio[925]: time="2024-04-29 14:14:42.680162404Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-84df5799c-qkqpq from CNI network \"kindnet\" (type=ptp)"
	Apr 29 14:14:42 addons-457090 crio[925]: time="2024-04-29 14:14:42.706815034Z" level=info msg="Stopped pod sandbox: 82e18be3465768e8ae0f2b00dcdfc6940813ee2f02fc7e41eae4ce5a7b1e7571" id=f1476e64-1534-4cc2-82a2-85460a334994 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 29 14:14:42 addons-457090 crio[925]: time="2024-04-29 14:14:42.806094573Z" level=info msg="Removing container: 2adeec085dd39a94220deb9409e6363374c2b212433c59ad1844cc49e085398d" id=1d754967-3900-44ee-8608-0e3aa83dd6cb name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 29 14:14:42 addons-457090 crio[925]: time="2024-04-29 14:14:42.821932188Z" level=info msg="Removed container 2adeec085dd39a94220deb9409e6363374c2b212433c59ad1844cc49e085398d: ingress-nginx/ingress-nginx-controller-84df5799c-qkqpq/controller" id=1d754967-3900-44ee-8608-0e3aa83dd6cb name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85fd28a66b119       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             7 seconds ago       Exited              hello-world-app           2                   d30b14e260c44       hello-world-app-86c47465fc-z6j5z
	6fa1fc1423c4b       docker.io/library/nginx@sha256:1f37baf7373d386ee9de0437325ae3e0202a3959803fd79144fa0bb27e2b2801                              2 minutes ago       Running             nginx                     0                   ceb339f77d791       nginx
	c82ac5fdba3be       ghcr.io/headlamp-k8s/headlamp@sha256:1f277f42730106526a27560517a4c5f9253ccb2477be458986f44a791158a02c                        3 minutes ago       Running             headlamp                  0                   e2f2663794a20       headlamp-7559bf459f-2zx6r
	01984734ff3ae       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 5 minutes ago       Running             gcp-auth                  0                   c2b5b232a09e7       gcp-auth-5db96cd9b4-kc2nb
	3e92c4f46c7f9       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              5 minutes ago       Running             yakd                      0                   ecd6a09b31bd4       yakd-dashboard-5ddbf7d777-w8n26
	2c1b29602737a       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70        6 minutes ago       Running             metrics-server            0                   654efe9363e2c       metrics-server-c59844bb4-hltz2
	b2e5cb6f195b2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0b1098ef00acee905f9736f98dd151af0a38d0fef0ccf9fb5ad189b20933e5f8   6 minutes ago       Exited              patch                     0                   3025657f9ee85       ingress-nginx-admission-patch-4hl58
	cde78b6418f3e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0b1098ef00acee905f9736f98dd151af0a38d0fef0ccf9fb5ad189b20933e5f8   6 minutes ago       Exited              create                    0                   77e5e795376a4       ingress-nginx-admission-create-62jpk
	a81321961a1d8       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             6 minutes ago       Running             coredns                   0                   d8ae2e591bdd7       coredns-7db6d8ff4d-8c59t
	19a13a7429ba0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             6 minutes ago       Running             storage-provisioner       0                   ec5f74e8a7ef7       storage-provisioner
	99b8b7a1eee2f       cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f                                                             6 minutes ago       Running             kube-proxy                0                   040bbb1a390c0       kube-proxy-6wf6b
	0229dc76b7d0d       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                                             6 minutes ago       Running             kindnet-cni               0                   bf59d2ef2bcd4       kindnet-tvhsm
	dab35c23ea406       547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a                                                             7 minutes ago       Running             kube-scheduler            0                   444913ef40c4f       kube-scheduler-addons-457090
	99e1db8ae8156       68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1                                                             7 minutes ago       Running             kube-controller-manager   0                   a205372cd9d97       kube-controller-manager-addons-457090
	8d4c3f49a1645       181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb                                                             7 minutes ago       Running             kube-apiserver            0                   49dd90a3c25e0       kube-apiserver-addons-457090
	3401a97b7bbcb       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                             7 minutes ago       Running             etcd                      0                   cc6531ddd2c59       etcd-addons-457090
	
	
	==> coredns [a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5] <==
	[INFO] 10.244.0.20:36330 - 52850 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064024s
	[INFO] 10.244.0.20:36330 - 48400 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000075552s
	[INFO] 10.244.0.20:36330 - 45816 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000172972s
	[INFO] 10.244.0.20:36330 - 44843 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000156299s
	[INFO] 10.244.0.20:36330 - 7732 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002661908s
	[INFO] 10.244.0.20:36330 - 1424 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001547741s
	[INFO] 10.244.0.20:36330 - 20865 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075068s
	[INFO] 10.244.0.20:54365 - 22631 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000315551s
	[INFO] 10.244.0.20:57667 - 61744 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000076529s
	[INFO] 10.244.0.20:57667 - 24351 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000084709s
	[INFO] 10.244.0.20:57667 - 52193 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00007076s
	[INFO] 10.244.0.20:54365 - 33470 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041485s
	[INFO] 10.244.0.20:54365 - 31504 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000239416s
	[INFO] 10.244.0.20:57667 - 10310 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037867s
	[INFO] 10.244.0.20:54365 - 17753 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035569s
	[INFO] 10.244.0.20:54365 - 5323 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050691s
	[INFO] 10.244.0.20:54365 - 26961 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043545s
	[INFO] 10.244.0.20:57667 - 943 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044045s
	[INFO] 10.244.0.20:57667 - 22171 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045226s
	[INFO] 10.244.0.20:54365 - 19060 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002191552s
	[INFO] 10.244.0.20:57667 - 50546 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002016694s
	[INFO] 10.244.0.20:54365 - 59551 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001323922s
	[INFO] 10.244.0.20:57667 - 43293 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001405137s
	[INFO] 10.244.0.20:57667 - 8542 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049936s
	[INFO] 10.244.0.20:54365 - 3578 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000167991s
	
	
	==> describe nodes <==
	Name:               addons-457090
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-457090
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=94a01df0d4b48636d4af3d06a53be687e06c0844
	                    minikube.k8s.io/name=addons-457090
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T14_07_38_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-457090
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 14:07:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-457090
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 14:14:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 14:14:47 +0000   Mon, 29 Apr 2024 14:07:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 14:14:47 +0000   Mon, 29 Apr 2024 14:07:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 14:14:47 +0000   Mon, 29 Apr 2024 14:07:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 14:14:47 +0000   Mon, 29 Apr 2024 14:08:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-457090
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 a89a0467952a46398c09ced7a4180db6
	  System UUID:                e60b6db6-cc0a-43d1-8947-017d88d6eca3
	  Boot ID:                    b8f2360a-0b19-4e04-aa8c-604719eae8f1
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-z6j5z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  gcp-auth                    gcp-auth-5db96cd9b4-kc2nb                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  headlamp                    headlamp-7559bf459f-2zx6r                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  kube-system                 coredns-7db6d8ff4d-8c59t                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m55s
	  kube-system                 etcd-addons-457090                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m10s
	  kube-system                 kindnet-tvhsm                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m55s
	  kube-system                 kube-apiserver-addons-457090             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-controller-manager-addons-457090    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 kube-proxy-6wf6b                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-scheduler-addons-457090             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 metrics-server-c59844bb4-hltz2           100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m53s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m53s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-w8n26          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m52s                  kube-proxy       
	  Normal  Starting                 7m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m18s (x8 over 7m18s)  kubelet          Node addons-457090 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m18s (x8 over 7m18s)  kubelet          Node addons-457090 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m18s (x8 over 7m18s)  kubelet          Node addons-457090 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m10s                  kubelet          Node addons-457090 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m10s                  kubelet          Node addons-457090 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m10s                  kubelet          Node addons-457090 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m58s                  node-controller  Node addons-457090 event: Registered Node addons-457090 in Controller
	  Normal  NodeReady                6m22s                  kubelet          Node addons-457090 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001061] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=00000000e2172674
	[  +0.001121] FS-Cache: O-key=[8] 'd5425c0100000000'
	[  +0.000718] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001013] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000c61f21fc
	[  +0.001056] FS-Cache: N-key=[8] 'd5425c0100000000'
	[  +2.201464] FS-Cache: Duplicate cookie detected
	[  +0.000788] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001073] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=00000000160cdc65
	[  +0.001055] FS-Cache: O-key=[8] 'd4425c0100000000'
	[  +0.000702] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000925] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000ec0b8a4d
	[  +0.001126] FS-Cache: N-key=[8] 'd4425c0100000000'
	[  +0.396125] FS-Cache: Duplicate cookie detected
	[  +0.000758] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.000978] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=000000003014340c
	[  +0.001111] FS-Cache: O-key=[8] 'da425c0100000000'
	[  +0.000783] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000053a5fe1
	[  +0.001072] FS-Cache: N-key=[8] 'da425c0100000000'
	[Apr29 13:39] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +48.347025] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.006466] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.002188] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.173561] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	
	
	==> etcd [3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c] <==
	{"level":"info","ts":"2024-04-29T14:07:31.25515Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T14:07:31.266252Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-04-29T14:07:31.276718Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T14:07:31.276818Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T14:07:31.276886Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T14:07:31.276972Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T14:07:31.277025Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-04-29T14:07:53.088986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.638032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T14:07:53.096321Z","caller":"traceutil/trace.go:171","msg":"trace[184047524] range","detail":"{range_begin:/registry/controllers/kube-system/registry; range_end:; response_count:0; response_revision:375; }","duration":"146.972854ms","start":"2024-04-29T14:07:52.949324Z","end":"2024-04-29T14:07:53.096297Z","steps":["trace[184047524] 'agreement among raft nodes before linearized reading'  (duration: 85.702981ms)","trace[184047524] 'range keys from in-memory index tree'  (duration: 53.922447ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T14:07:53.097398Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.830252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T14:07:53.101108Z","caller":"traceutil/trace.go:171","msg":"trace[104445180] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:377; }","duration":"151.543994ms","start":"2024-04-29T14:07:52.949549Z","end":"2024-04-29T14:07:53.101093Z","steps":["trace[104445180] 'agreement among raft nodes before linearized reading'  (duration: 147.764685ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:07:53.101157Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.628309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T14:07:53.123323Z","caller":"traceutil/trace.go:171","msg":"trace[133450843] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:377; }","duration":"173.793689ms","start":"2024-04-29T14:07:52.949414Z","end":"2024-04-29T14:07:53.123208Z","steps":["trace[133450843] 'agreement among raft nodes before linearized reading'  (duration: 151.591386ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T14:07:53.602269Z","caller":"traceutil/trace.go:171","msg":"trace[1328106099] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"151.363983ms","start":"2024-04-29T14:07:53.450886Z","end":"2024-04-29T14:07:53.60225Z","steps":["trace[1328106099] 'process raft request'  (duration: 92.474739ms)","trace[1328106099] 'compare'  (duration: 57.453774ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T14:07:53.602495Z","caller":"traceutil/trace.go:171","msg":"trace[410746466] linearizableReadLoop","detail":"{readStateIndex:411; appliedIndex:410; }","duration":"151.31307ms","start":"2024-04-29T14:07:53.451173Z","end":"2024-04-29T14:07:53.602486Z","steps":["trace[410746466] 'read index received'  (duration: 91.752142ms)","trace[410746466] 'applied index is now lower than readState.Index'  (duration: 59.560001ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T14:07:53.60262Z","caller":"traceutil/trace.go:171","msg":"trace[674151775] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"125.974623ms","start":"2024-04-29T14:07:53.476639Z","end":"2024-04-29T14:07:53.602613Z","steps":["trace[674151775] 'process raft request'  (duration: 124.690052ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T14:07:53.602782Z","caller":"traceutil/trace.go:171","msg":"trace[789335460] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"109.297005ms","start":"2024-04-29T14:07:53.493479Z","end":"2024-04-29T14:07:53.602776Z","steps":["trace[789335460] 'process raft request'  (duration: 107.903101ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:07:53.603095Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.906478ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/\" range_end:\"/registry/serviceaccounts/kube-system0\" ","response":"range_response_count:40 size:9600"}
	{"level":"warn","ts":"2024-04-29T14:07:53.623635Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.504039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T14:07:53.628829Z","caller":"traceutil/trace.go:171","msg":"trace[1394273420] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:403; }","duration":"135.702096ms","start":"2024-04-29T14:07:53.493107Z","end":"2024-04-29T14:07:53.628809Z","steps":["trace[1394273420] 'agreement among raft nodes before linearized reading'  (duration: 130.462702ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:07:53.629562Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.865935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/yakd-dashboard/\" range_end:\"/registry/resourcequotas/yakd-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T14:07:53.629678Z","caller":"traceutil/trace.go:171","msg":"trace[1752055843] range","detail":"{range_begin:/registry/resourcequotas/yakd-dashboard/; range_end:/registry/resourcequotas/yakd-dashboard0; response_count:0; response_revision:403; }","duration":"135.987264ms","start":"2024-04-29T14:07:53.493681Z","end":"2024-04-29T14:07:53.629668Z","steps":["trace[1752055843] 'agreement among raft nodes before linearized reading'  (duration: 135.85316ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:07:53.629868Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.642571ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/local-path-storage/\" range_end:\"/registry/resourcequotas/local-path-storage0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T14:07:53.629953Z","caller":"traceutil/trace.go:171","msg":"trace[1039367830] range","detail":"{range_begin:/registry/resourcequotas/local-path-storage/; range_end:/registry/resourcequotas/local-path-storage0; response_count:0; response_revision:403; }","duration":"136.728527ms","start":"2024-04-29T14:07:53.493216Z","end":"2024-04-29T14:07:53.629945Z","steps":["trace[1039367830] 'agreement among raft nodes before linearized reading'  (duration: 136.629656ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T14:07:53.631635Z","caller":"traceutil/trace.go:171","msg":"trace[261754142] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/; range_end:/registry/serviceaccounts/kube-system0; response_count:40; response_revision:403; }","duration":"155.802315ms","start":"2024-04-29T14:07:53.451168Z","end":"2024-04-29T14:07:53.60697Z","steps":["trace[261754142] 'agreement among raft nodes before linearized reading'  (duration: 151.743722ms)"],"step_count":1}
	
	
	==> gcp-auth [01984734ff3aed05b66196108f486b91a04021d4f9ebe8252f25c35963b06009] <==
	2024/04/29 14:09:13 GCP Auth Webhook started!
	2024/04/29 14:10:40 Ready to marshal response ...
	2024/04/29 14:10:40 Ready to write response ...
	2024/04/29 14:10:41 Ready to marshal response ...
	2024/04/29 14:10:41 Ready to write response ...
	2024/04/29 14:10:57 Ready to marshal response ...
	2024/04/29 14:10:57 Ready to write response ...
	2024/04/29 14:10:57 Ready to marshal response ...
	2024/04/29 14:10:57 Ready to write response ...
	2024/04/29 14:11:03 Ready to marshal response ...
	2024/04/29 14:11:03 Ready to write response ...
	2024/04/29 14:11:07 Ready to marshal response ...
	2024/04/29 14:11:07 Ready to write response ...
	2024/04/29 14:11:27 Ready to marshal response ...
	2024/04/29 14:11:27 Ready to write response ...
	2024/04/29 14:11:27 Ready to marshal response ...
	2024/04/29 14:11:27 Ready to write response ...
	2024/04/29 14:11:27 Ready to marshal response ...
	2024/04/29 14:11:27 Ready to write response ...
	2024/04/29 14:12:02 Ready to marshal response ...
	2024/04/29 14:12:02 Ready to write response ...
	2024/04/29 14:14:22 Ready to marshal response ...
	2024/04/29 14:14:22 Ready to write response ...
	
	
	==> kernel <==
	 14:14:48 up  9:57,  0 users,  load average: 0.57, 1.36, 2.32
	Linux addons-457090 5.15.0-1058-aws #64~20.04.1-Ubuntu SMP Tue Apr 9 11:11:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc] <==
	I0429 14:12:45.850378       1 main.go:227] handling current node
	I0429 14:12:55.854433       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:12:55.854463       1 main.go:227] handling current node
	I0429 14:13:05.866336       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:13:05.866365       1 main.go:227] handling current node
	I0429 14:13:15.878853       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:13:15.878882       1 main.go:227] handling current node
	I0429 14:13:25.886983       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:13:25.887012       1 main.go:227] handling current node
	I0429 14:13:35.897117       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:13:35.897145       1 main.go:227] handling current node
	I0429 14:13:45.908328       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:13:45.908355       1 main.go:227] handling current node
	I0429 14:13:55.912610       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:13:55.912645       1 main.go:227] handling current node
	I0429 14:14:05.917648       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:14:05.917677       1 main.go:227] handling current node
	I0429 14:14:15.929949       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:14:15.929976       1 main.go:227] handling current node
	I0429 14:14:25.944407       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:14:25.944442       1 main.go:227] handling current node
	I0429 14:14:35.956760       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:14:35.956785       1 main.go:227] handling current node
	I0429 14:14:45.965877       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:14:45.965903       1 main.go:227] handling current node
	
	
	==> kube-apiserver [8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 14:09:54.312782       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0429 14:10:54.774077       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0429 14:11:08.435800       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0429 14:11:08.448826       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0429 14:11:08.459091       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0429 14:11:14.366255       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I0429 14:11:20.921469       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 14:11:20.921604       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 14:11:21.015696       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 14:11:21.015847       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 14:11:21.040179       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 14:11:21.040301       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 14:11:21.077320       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 14:11:21.077448       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0429 14:11:22.040555       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0429 14:11:22.078088       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0429 14:11:22.091693       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0429 14:11:23.461019       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0429 14:11:27.643482       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.18.186"}
	I0429 14:11:56.288499       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0429 14:11:57.342692       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0429 14:12:01.831412       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0429 14:12:02.140477       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.142.127"}
	I0429 14:14:22.692277       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.152.50"}
	
	
	==> kube-controller-manager [99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034] <==
	W0429 14:13:27.019861       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 14:13:27.019904       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 14:13:28.449802       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 14:13:28.449842       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 14:14:05.859604       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 14:14:05.859645       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 14:14:05.927765       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 14:14:05.927804       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 14:14:14.668278       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 14:14:14.668314       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 14:14:19.758776       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 14:14:19.758812       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0429 14:14:22.502357       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="53.650859ms"
	I0429 14:14:22.525865       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="23.454119ms"
	I0429 14:14:22.526040       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="40.935µs"
	I0429 14:14:22.539300       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="40.763µs"
	I0429 14:14:26.785614       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="34.577µs"
	I0429 14:14:27.786850       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="51.528µs"
	I0429 14:14:28.782391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="37.702µs"
	I0429 14:14:39.483543       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0429 14:14:39.489247       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="9.206µs"
	I0429 14:14:39.501109       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0429 14:14:40.819665       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="39.072µs"
	W0429 14:14:44.345437       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 14:14:44.345562       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67] <==
	I0429 14:07:55.453455       1 server_linux.go:69] "Using iptables proxy"
	I0429 14:07:55.577403       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0429 14:07:55.685651       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0429 14:07:55.685873       1 server_linux.go:165] "Using iptables Proxier"
	I0429 14:07:55.689174       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0429 14:07:55.689205       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0429 14:07:55.689227       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 14:07:55.689419       1 server.go:872] "Version info" version="v1.30.0"
	I0429 14:07:55.689441       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 14:07:55.690642       1 config.go:192] "Starting service config controller"
	I0429 14:07:55.690662       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 14:07:55.690689       1 config.go:101] "Starting endpoint slice config controller"
	I0429 14:07:55.690701       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 14:07:55.691147       1 config.go:319] "Starting node config controller"
	I0429 14:07:55.691164       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 14:07:55.793700       1 shared_informer.go:320] Caches are synced for service config
	I0429 14:07:55.793769       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 14:07:55.792340       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93] <==
	I0429 14:07:33.280339       1 serving.go:380] Generated self-signed cert in-memory
	W0429 14:07:35.867108       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 14:07:35.867151       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 14:07:35.867161       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 14:07:35.867168       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 14:07:35.902939       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 14:07:35.908733       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 14:07:35.913009       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 14:07:35.913536       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 14:07:35.920729       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 14:07:35.913556       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0429 14:07:35.925058       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 14:07:35.925167       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0429 14:07:37.021927       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 14:14:28 addons-457090 kubelet[1499]: E0429 14:14:28.770064    1499 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-z6j5z_default(7e0954b1-1197-4cd7-85c3-d989d0d8799d)\"" pod="default/hello-world-app-86c47465fc-z6j5z" podUID="7e0954b1-1197-4cd7-85c3-d989d0d8799d"
	Apr 29 14:14:34 addons-457090 kubelet[1499]: I0429 14:14:34.222265    1499 scope.go:117] "RemoveContainer" containerID="bd2c54caebfcf16ba9cae5f965ab797bab405d0f41c02efc4014a0b6cb4a0ad7"
	Apr 29 14:14:34 addons-457090 kubelet[1499]: E0429 14:14:34.222544    1499 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(757aca19-0d56-4052-975e-6621832dc1b4)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="757aca19-0d56-4052-975e-6621832dc1b4"
	Apr 29 14:14:38 addons-457090 kubelet[1499]: I0429 14:14:38.561520    1499 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jkbm\" (UniqueName: \"kubernetes.io/projected/757aca19-0d56-4052-975e-6621832dc1b4-kube-api-access-9jkbm\") pod \"757aca19-0d56-4052-975e-6621832dc1b4\" (UID: \"757aca19-0d56-4052-975e-6621832dc1b4\") "
	Apr 29 14:14:38 addons-457090 kubelet[1499]: I0429 14:14:38.566343    1499 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/757aca19-0d56-4052-975e-6621832dc1b4-kube-api-access-9jkbm" (OuterVolumeSpecName: "kube-api-access-9jkbm") pod "757aca19-0d56-4052-975e-6621832dc1b4" (UID: "757aca19-0d56-4052-975e-6621832dc1b4"). InnerVolumeSpecName "kube-api-access-9jkbm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 29 14:14:38 addons-457090 kubelet[1499]: I0429 14:14:38.662207    1499 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9jkbm\" (UniqueName: \"kubernetes.io/projected/757aca19-0d56-4052-975e-6621832dc1b4-kube-api-access-9jkbm\") on node \"addons-457090\" DevicePath \"\""
	Apr 29 14:14:38 addons-457090 kubelet[1499]: I0429 14:14:38.791618    1499 scope.go:117] "RemoveContainer" containerID="bd2c54caebfcf16ba9cae5f965ab797bab405d0f41c02efc4014a0b6cb4a0ad7"
	Apr 29 14:14:39 addons-457090 kubelet[1499]: I0429 14:14:39.224033    1499 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="757aca19-0d56-4052-975e-6621832dc1b4" path="/var/lib/kubelet/pods/757aca19-0d56-4052-975e-6621832dc1b4/volumes"
	Apr 29 14:14:40 addons-457090 kubelet[1499]: I0429 14:14:40.222616    1499 scope.go:117] "RemoveContainer" containerID="65997e489d6a5d8d60be88daf946cb4465a2f4e736c75581d738540460b3e393"
	Apr 29 14:14:40 addons-457090 kubelet[1499]: I0429 14:14:40.798205    1499 scope.go:117] "RemoveContainer" containerID="65997e489d6a5d8d60be88daf946cb4465a2f4e736c75581d738540460b3e393"
	Apr 29 14:14:40 addons-457090 kubelet[1499]: I0429 14:14:40.798466    1499 scope.go:117] "RemoveContainer" containerID="85fd28a66b11945e073794ebb93a0fe0def80e8759c51a6c47a79f9906f374dd"
	Apr 29 14:14:40 addons-457090 kubelet[1499]: E0429 14:14:40.798723    1499 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-z6j5z_default(7e0954b1-1197-4cd7-85c3-d989d0d8799d)\"" pod="default/hello-world-app-86c47465fc-z6j5z" podUID="7e0954b1-1197-4cd7-85c3-d989d0d8799d"
	Apr 29 14:14:41 addons-457090 kubelet[1499]: I0429 14:14:41.223712    1499 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28f6f2cd-f4a3-4761-9d5e-78f388d3899b" path="/var/lib/kubelet/pods/28f6f2cd-f4a3-4761-9d5e-78f388d3899b/volumes"
	Apr 29 14:14:41 addons-457090 kubelet[1499]: I0429 14:14:41.224136    1499 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bb08cb4-99d9-466e-9aa1-cbe856913467" path="/var/lib/kubelet/pods/9bb08cb4-99d9-466e-9aa1-cbe856913467/volumes"
	Apr 29 14:14:42 addons-457090 kubelet[1499]: I0429 14:14:42.792169    1499 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6855\" (UniqueName: \"kubernetes.io/projected/a543ff14-53a0-4c1b-9db4-b3f9eef88d6c-kube-api-access-x6855\") pod \"a543ff14-53a0-4c1b-9db4-b3f9eef88d6c\" (UID: \"a543ff14-53a0-4c1b-9db4-b3f9eef88d6c\") "
	Apr 29 14:14:42 addons-457090 kubelet[1499]: I0429 14:14:42.792232    1499 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a543ff14-53a0-4c1b-9db4-b3f9eef88d6c-webhook-cert\") pod \"a543ff14-53a0-4c1b-9db4-b3f9eef88d6c\" (UID: \"a543ff14-53a0-4c1b-9db4-b3f9eef88d6c\") "
	Apr 29 14:14:42 addons-457090 kubelet[1499]: I0429 14:14:42.794611    1499 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a543ff14-53a0-4c1b-9db4-b3f9eef88d6c-kube-api-access-x6855" (OuterVolumeSpecName: "kube-api-access-x6855") pod "a543ff14-53a0-4c1b-9db4-b3f9eef88d6c" (UID: "a543ff14-53a0-4c1b-9db4-b3f9eef88d6c"). InnerVolumeSpecName "kube-api-access-x6855". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 29 14:14:42 addons-457090 kubelet[1499]: I0429 14:14:42.797437    1499 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a543ff14-53a0-4c1b-9db4-b3f9eef88d6c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a543ff14-53a0-4c1b-9db4-b3f9eef88d6c" (UID: "a543ff14-53a0-4c1b-9db4-b3f9eef88d6c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 29 14:14:42 addons-457090 kubelet[1499]: I0429 14:14:42.804953    1499 scope.go:117] "RemoveContainer" containerID="2adeec085dd39a94220deb9409e6363374c2b212433c59ad1844cc49e085398d"
	Apr 29 14:14:42 addons-457090 kubelet[1499]: I0429 14:14:42.822217    1499 scope.go:117] "RemoveContainer" containerID="2adeec085dd39a94220deb9409e6363374c2b212433c59ad1844cc49e085398d"
	Apr 29 14:14:42 addons-457090 kubelet[1499]: E0429 14:14:42.822675    1499 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2adeec085dd39a94220deb9409e6363374c2b212433c59ad1844cc49e085398d\": container with ID starting with 2adeec085dd39a94220deb9409e6363374c2b212433c59ad1844cc49e085398d not found: ID does not exist" containerID="2adeec085dd39a94220deb9409e6363374c2b212433c59ad1844cc49e085398d"
	Apr 29 14:14:42 addons-457090 kubelet[1499]: I0429 14:14:42.822716    1499 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2adeec085dd39a94220deb9409e6363374c2b212433c59ad1844cc49e085398d"} err="failed to get container status \"2adeec085dd39a94220deb9409e6363374c2b212433c59ad1844cc49e085398d\": rpc error: code = NotFound desc = could not find container \"2adeec085dd39a94220deb9409e6363374c2b212433c59ad1844cc49e085398d\": container with ID starting with 2adeec085dd39a94220deb9409e6363374c2b212433c59ad1844cc49e085398d not found: ID does not exist"
	Apr 29 14:14:42 addons-457090 kubelet[1499]: I0429 14:14:42.893360    1499 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a543ff14-53a0-4c1b-9db4-b3f9eef88d6c-webhook-cert\") on node \"addons-457090\" DevicePath \"\""
	Apr 29 14:14:42 addons-457090 kubelet[1499]: I0429 14:14:42.893408    1499 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-x6855\" (UniqueName: \"kubernetes.io/projected/a543ff14-53a0-4c1b-9db4-b3f9eef88d6c-kube-api-access-x6855\") on node \"addons-457090\" DevicePath \"\""
	Apr 29 14:14:43 addons-457090 kubelet[1499]: I0429 14:14:43.224281    1499 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a543ff14-53a0-4c1b-9db4-b3f9eef88d6c" path="/var/lib/kubelet/pods/a543ff14-53a0-4c1b-9db4-b3f9eef88d6c/volumes"
	
	
	==> storage-provisioner [19a13a7429ba0c21152b3811e3da57e53205b758fcacfeaccaab942065bd5b8b] <==
	I0429 14:08:27.081886       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 14:08:27.097405       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 14:08:27.097609       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 14:08:27.110477       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 14:08:27.111553       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-457090_54ebab16-af77-4994-b425-0fe6282ae3f2!
	I0429 14:08:27.111766       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9d103883-fb0a-4686-8739-74a01b7285ce", APIVersion:"v1", ResourceVersion:"946", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-457090_54ebab16-af77-4994-b425-0fe6282ae3f2 became leader
	I0429 14:08:27.211716       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-457090_54ebab16-af77-4994-b425-0fe6282ae3f2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-457090 -n addons-457090
helpers_test.go:261: (dbg) Run:  kubectl --context addons-457090 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (167.42s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (347.86s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.444441ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-hltz2" [aedce136-b59d-41a1-83ba-037b4f9e9302] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004390284s
addons_test.go:415: (dbg) Run:  kubectl --context addons-457090 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-457090 top pods -n kube-system: exit status 1 (88.253193ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-8c59t, age: 3m50.806036904s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-457090 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-457090 top pods -n kube-system: exit status 1 (85.929088ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-8c59t, age: 3m53.908867063s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-457090 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-457090 top pods -n kube-system: exit status 1 (88.208287ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-8c59t, age: 3m57.999525947s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-457090 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-457090 top pods -n kube-system: exit status 1 (84.791274ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-8c59t, age: 4m2.974053602s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-457090 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-457090 top pods -n kube-system: exit status 1 (105.703263ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-8c59t, age: 4m11.048358479s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-457090 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-457090 top pods -n kube-system: exit status 1 (90.925292ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-8c59t, age: 4m19.703582213s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-457090 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-457090 top pods -n kube-system: exit status 1 (82.127553ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-8c59t, age: 4m51.716456166s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-457090 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-457090 top pods -n kube-system: exit status 1 (103.732238ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-8c59t, age: 5m23.015057796s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-457090 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-457090 top pods -n kube-system: exit status 1 (92.915417ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-8c59t, age: 6m29.391336774s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-457090 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-457090 top pods -n kube-system: exit status 1 (88.708239ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-8c59t, age: 7m19.147159123s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-457090 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-457090 top pods -n kube-system: exit status 1 (84.16809ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-8c59t, age: 8m5.078316547s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-457090 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-457090 top pods -n kube-system: exit status 1 (93.93183ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-8c59t, age: 9m30.499958132s

                                                
                                                
** /stderr **
addons_test.go:429: failed checking metric server: exit status 1
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-457090 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-457090
helpers_test.go:235: (dbg) docker inspect addons-457090:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bd12d3ace1bb99c97c85534c8adee0c896b84d4633e8fc4f8238ef3baef89283",
	        "Created": "2024-04-29T14:07:15.493652234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1903788,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-29T14:07:15.817386752Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c9315e0f61546d7b9630cf89252fa7f614fc966830e816cca5333df5c944376f",
	        "ResolvConfPath": "/var/lib/docker/containers/bd12d3ace1bb99c97c85534c8adee0c896b84d4633e8fc4f8238ef3baef89283/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bd12d3ace1bb99c97c85534c8adee0c896b84d4633e8fc4f8238ef3baef89283/hostname",
	        "HostsPath": "/var/lib/docker/containers/bd12d3ace1bb99c97c85534c8adee0c896b84d4633e8fc4f8238ef3baef89283/hosts",
	        "LogPath": "/var/lib/docker/containers/bd12d3ace1bb99c97c85534c8adee0c896b84d4633e8fc4f8238ef3baef89283/bd12d3ace1bb99c97c85534c8adee0c896b84d4633e8fc4f8238ef3baef89283-json.log",
	        "Name": "/addons-457090",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-457090:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-457090",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/811dc4453d69936d5896874dd5f4e4478c0e9e73b97f44bd0e82eb46ac761c9c-init/diff:/var/lib/docker/overlay2/f080d6ed1efba2dbfce916f4260b407bf4d9204079d2708eb1c14f6847e489ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/811dc4453d69936d5896874dd5f4e4478c0e9e73b97f44bd0e82eb46ac761c9c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/811dc4453d69936d5896874dd5f4e4478c0e9e73b97f44bd0e82eb46ac761c9c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/811dc4453d69936d5896874dd5f4e4478c0e9e73b97f44bd0e82eb46ac761c9c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-457090",
	                "Source": "/var/lib/docker/volumes/addons-457090/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-457090",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-457090",
	                "name.minikube.sigs.k8s.io": "addons-457090",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "079ddbe097cb5488e31811a4f7eaae32442e92a52f31f1ade40b3f25af515dcd",
	            "SandboxKey": "/var/run/docker/netns/079ddbe097cb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35042"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35041"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35038"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35040"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35039"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-457090": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "51179890997c9a35c5370f94d300b54c7cfc97355ada9f1fe12d84336c5bf2eb",
	                    "EndpointID": "24ec836fd989eee17d7df21cca7817d54bc7ed86503c52745596c1a4a655b584",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-457090",
	                        "bd12d3ace1bb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-457090 -n addons-457090
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-457090 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-457090 logs -n 25: (1.524216044s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-605899                                                                     | download-only-605899   | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC | 29 Apr 24 14:06 UTC |
	| delete  | -p download-only-668091                                                                     | download-only-668091   | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC | 29 Apr 24 14:06 UTC |
	| delete  | -p download-only-605899                                                                     | download-only-605899   | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC | 29 Apr 24 14:06 UTC |
	| start   | --download-only -p                                                                          | download-docker-259064 | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC |                     |
	|         | download-docker-259064                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-259064                                                                   | download-docker-259064 | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC | 29 Apr 24 14:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-349287   | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC |                     |
	|         | binary-mirror-349287                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36983                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-349287                                                                     | binary-mirror-349287   | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC | 29 Apr 24 14:06 UTC |
	| addons  | enable dashboard -p                                                                         | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC |                     |
	|         | addons-457090                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC |                     |
	|         | addons-457090                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-457090 --wait=true                                                                | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC | 29 Apr 24 14:10 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-457090 ip                                                                            | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:10 UTC | 29 Apr 24 14:10 UTC |
	| addons  | addons-457090 addons disable                                                                | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:10 UTC | 29 Apr 24 14:10 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:10 UTC | 29 Apr 24 14:10 UTC |
	|         | -p addons-457090                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-457090 ssh cat                                                                       | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:11 UTC | 29 Apr 24 14:11 UTC |
	|         | /opt/local-path-provisioner/pvc-d73e47b3-72c4-4752-8811-fa0e3b0dd658_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-457090 addons disable                                                                | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:11 UTC | 29 Apr 24 14:11 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-457090 addons                                                                        | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:11 UTC | 29 Apr 24 14:11 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-457090 addons                                                                        | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:11 UTC | 29 Apr 24 14:11 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:11 UTC | 29 Apr 24 14:11 UTC |
	|         | addons-457090                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:11 UTC | 29 Apr 24 14:11 UTC |
	|         | -p addons-457090                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:11 UTC | 29 Apr 24 14:12 UTC |
	|         | addons-457090                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-457090 ssh curl -s                                                                   | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:12 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-457090 ip                                                                            | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:14 UTC | 29 Apr 24 14:14 UTC |
	| addons  | addons-457090 addons disable                                                                | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:14 UTC | 29 Apr 24 14:14 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-457090 addons disable                                                                | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:14 UTC | 29 Apr 24 14:14 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-457090 addons                                                                        | addons-457090          | jenkins | v1.33.0 | 29 Apr 24 14:17 UTC | 29 Apr 24 14:17 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 14:06:51
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 14:06:51.726047 1903322 out.go:291] Setting OutFile to fd 1 ...
	I0429 14:06:51.726231 1903322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:06:51.726266 1903322 out.go:304] Setting ErrFile to fd 2...
	I0429 14:06:51.726284 1903322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:06:51.726656 1903322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
	I0429 14:06:51.727303 1903322 out.go:298] Setting JSON to false
	I0429 14:06:51.728883 1903322 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":35356,"bootTime":1714364256,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 14:06:51.729055 1903322 start.go:139] virtualization:  
	I0429 14:06:51.732025 1903322 out.go:177] * [addons-457090] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 14:06:51.734936 1903322 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 14:06:51.736883 1903322 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 14:06:51.735005 1903322 notify.go:220] Checking for updates...
	I0429 14:06:51.740423 1903322 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:06:51.742403 1903322 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	I0429 14:06:51.744292 1903322 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0429 14:06:51.746034 1903322 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 14:06:51.748274 1903322 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 14:06:51.768336 1903322 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 14:06:51.768453 1903322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:06:51.832407 1903322 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-29 14:06:51.82274862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:06:51.832518 1903322 docker.go:295] overlay module found
	I0429 14:06:51.834657 1903322 out.go:177] * Using the docker driver based on user configuration
	I0429 14:06:51.836572 1903322 start.go:297] selected driver: docker
	I0429 14:06:51.836590 1903322 start.go:901] validating driver "docker" against <nil>
	I0429 14:06:51.836602 1903322 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 14:06:51.837285 1903322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:06:51.890240 1903322 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-29 14:06:51.88171619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:06:51.890427 1903322 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 14:06:51.890648 1903322 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 14:06:51.893056 1903322 out.go:177] * Using Docker driver with root privileges
	I0429 14:06:51.894893 1903322 cni.go:84] Creating CNI manager for ""
	I0429 14:06:51.894913 1903322 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 14:06:51.894922 1903322 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 14:06:51.895005 1903322 start.go:340] cluster config:
	{Name:addons-457090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-457090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:06:51.897389 1903322 out.go:177] * Starting "addons-457090" primary control-plane node in "addons-457090" cluster
	I0429 14:06:51.899197 1903322 cache.go:121] Beginning downloading kic base image for docker with crio
	I0429 14:06:51.901180 1903322 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 14:06:51.903373 1903322 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 14:06:51.903504 1903322 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:06:51.903535 1903322 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	I0429 14:06:51.903545 1903322 cache.go:56] Caching tarball of preloaded images
	I0429 14:06:51.903612 1903322 preload.go:173] Found /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0429 14:06:51.903628 1903322 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 14:06:51.903968 1903322 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/config.json ...
	I0429 14:06:51.903995 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/config.json: {Name:mkedaaf14e5e59422442c581aac85e090158d002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
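The full cluster config dumped above is persisted at this point to the profile's config.json. A quick way to pull the key fields back out of that file (illustrative only; assumes jq is available on the CI host and that the JSON field names match the struct dump above) is:

	jq '{KubernetesVersion: .KubernetesConfig.KubernetesVersion, ContainerRuntime: .KubernetesConfig.ContainerRuntime, Nodes: .Nodes}' \
	  /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/config.json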
	I0429 14:06:51.917232 1903322 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0429 14:06:51.917356 1903322 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory
	I0429 14:06:51.917382 1903322 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory, skipping pull
	I0429 14:06:51.917391 1903322 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in cache, skipping pull
	I0429 14:06:51.917404 1903322 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e as a tarball
	I0429 14:06:51.917414 1903322 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e from local cache
	I0429 14:07:08.704420 1903322 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e from cached tarball
	I0429 14:07:08.704458 1903322 cache.go:194] Successfully downloaded all kic artifacts
	I0429 14:07:08.704495 1903322 start.go:360] acquireMachinesLock for addons-457090: {Name:mk348a5f4a64954a7fbc72594b4980ed5c9598c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 14:07:08.704614 1903322 start.go:364] duration metric: took 95.187µs to acquireMachinesLock for "addons-457090"
	I0429 14:07:08.704655 1903322 start.go:93] Provisioning new machine with config: &{Name:addons-457090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-457090 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 14:07:08.704747 1903322 start.go:125] createHost starting for "" (driver="docker")
	I0429 14:07:08.707668 1903322 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0429 14:07:08.707908 1903322 start.go:159] libmachine.API.Create for "addons-457090" (driver="docker")
	I0429 14:07:08.707953 1903322 client.go:168] LocalClient.Create starting
	I0429 14:07:08.708066 1903322 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem
	I0429 14:07:09.072840 1903322 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem
	I0429 14:07:09.873399 1903322 cli_runner.go:164] Run: docker network inspect addons-457090 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0429 14:07:09.888636 1903322 cli_runner.go:211] docker network inspect addons-457090 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0429 14:07:09.888730 1903322 network_create.go:281] running [docker network inspect addons-457090] to gather additional debugging logs...
	I0429 14:07:09.888752 1903322 cli_runner.go:164] Run: docker network inspect addons-457090
	W0429 14:07:09.902648 1903322 cli_runner.go:211] docker network inspect addons-457090 returned with exit code 1
	I0429 14:07:09.902676 1903322 network_create.go:284] error running [docker network inspect addons-457090]: docker network inspect addons-457090: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-457090 not found
	I0429 14:07:09.902689 1903322 network_create.go:286] output of [docker network inspect addons-457090]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-457090 not found
	
	** /stderr **
	I0429 14:07:09.902807 1903322 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 14:07:09.918453 1903322 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002545150}
	I0429 14:07:09.918494 1903322 network_create.go:124] attempt to create docker network addons-457090 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0429 14:07:09.918549 1903322 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-457090 addons-457090
	I0429 14:07:09.974964 1903322 network_create.go:108] docker network addons-457090 192.168.49.0/24 created
	I0429 14:07:09.974995 1903322 kic.go:121] calculated static IP "192.168.49.2" for the "addons-457090" container
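At this point the profile's bridge network exists with the free subnet picked above; an illustrative check of the subnet, gateway and MTU it was created with:

	docker network inspect addons-457090 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw={{(index .IPAM.Config 0).Gateway}} mtu={{index .Options "com.docker.network.driver.mtu"}}'
	# expected: 192.168.49.0/24 gw=192.168.49.1 mtu=1500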
	I0429 14:07:09.975079 1903322 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0429 14:07:09.988436 1903322 cli_runner.go:164] Run: docker volume create addons-457090 --label name.minikube.sigs.k8s.io=addons-457090 --label created_by.minikube.sigs.k8s.io=true
	I0429 14:07:10.016750 1903322 oci.go:103] Successfully created a docker volume addons-457090
	I0429 14:07:10.016862 1903322 cli_runner.go:164] Run: docker run --rm --name addons-457090-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-457090 --entrypoint /usr/bin/test -v addons-457090:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0429 14:07:11.324368 1903322 cli_runner.go:217] Completed: docker run --rm --name addons-457090-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-457090 --entrypoint /usr/bin/test -v addons-457090:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib: (1.307466693s)
	I0429 14:07:11.324402 1903322 oci.go:107] Successfully prepared a docker volume addons-457090
	I0429 14:07:11.324446 1903322 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:07:11.324468 1903322 kic.go:194] Starting extracting preloaded images to volume ...
	I0429 14:07:11.324544 1903322 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-457090:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0429 14:07:15.432945 1903322 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-457090:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.108361242s)
	I0429 14:07:15.432977 1903322 kic.go:203] duration metric: took 4.108504822s to extract preloaded images to volume ...
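The preloaded images now sit inside the addons-457090 volume. A rough way to peek at what the extraction produced (illustrative; assumes the usual cri-o storage layout under /var/lib/containers/storage) is to re-mount the volume with the same base image:

	docker run --rm --entrypoint /bin/sh -v addons-457090:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e \
	  -c 'ls /var/lib/containers/storage'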
	W0429 14:07:15.433112 1903322 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0429 14:07:15.433234 1903322 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0429 14:07:15.480396 1903322 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-457090 --name addons-457090 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-457090 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-457090 --network addons-457090 --ip 192.168.49.2 --volume addons-457090:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e
	I0429 14:07:15.826833 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Running}}
	I0429 14:07:15.847470 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:15.869515 1903322 cli_runner.go:164] Run: docker exec addons-457090 stat /var/lib/dpkg/alternatives/iptables
	I0429 14:07:15.935703 1903322 oci.go:144] the created container "addons-457090" has a running status.
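The node container is now up; an illustrative check that it is running and received the static IP calculated earlier:

	docker inspect addons-457090 \
	  --format '{{.State.Status}} {{(index .NetworkSettings.Networks "addons-457090").IPAddress}}'
	# expected: running 192.168.49.2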
	I0429 14:07:15.935738 1903322 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa...
	I0429 14:07:16.511966 1903322 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0429 14:07:16.544784 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:16.562221 1903322 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0429 14:07:16.562245 1903322 kic_runner.go:114] Args: [docker exec --privileged addons-457090 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0429 14:07:16.624195 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:16.643295 1903322 machine.go:94] provisionDockerMachine start ...
	I0429 14:07:16.643401 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:16.663210 1903322 main.go:141] libmachine: Using SSH client type: native
	I0429 14:07:16.663482 1903322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35042 <nil> <nil>}
	I0429 14:07:16.663490 1903322 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 14:07:16.791991 1903322 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-457090
	
	I0429 14:07:16.792059 1903322 ubuntu.go:169] provisioning hostname "addons-457090"
	I0429 14:07:16.792152 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:16.814630 1903322 main.go:141] libmachine: Using SSH client type: native
	I0429 14:07:16.814958 1903322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35042 <nil> <nil>}
	I0429 14:07:16.814975 1903322 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-457090 && echo "addons-457090" | sudo tee /etc/hostname
	I0429 14:07:16.961204 1903322 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-457090
	
	I0429 14:07:16.961333 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:16.977684 1903322 main.go:141] libmachine: Using SSH client type: native
	I0429 14:07:16.977930 1903322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35042 <nil> <nil>}
	I0429 14:07:16.977952 1903322 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-457090' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-457090/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-457090' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 14:07:17.104776 1903322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 14:07:17.104816 1903322 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18771-1897267/.minikube CaCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18771-1897267/.minikube}
	I0429 14:07:17.104843 1903322 ubuntu.go:177] setting up certificates
	I0429 14:07:17.104852 1903322 provision.go:84] configureAuth start
	I0429 14:07:17.104922 1903322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-457090
	I0429 14:07:17.124766 1903322 provision.go:143] copyHostCerts
	I0429 14:07:17.124852 1903322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem (1078 bytes)
	I0429 14:07:17.124980 1903322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem (1123 bytes)
	I0429 14:07:17.125049 1903322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem (1679 bytes)
	I0429 14:07:17.125105 1903322 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem org=jenkins.addons-457090 san=[127.0.0.1 192.168.49.2 addons-457090 localhost minikube]
	I0429 14:07:17.501573 1903322 provision.go:177] copyRemoteCerts
	I0429 14:07:17.501655 1903322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 14:07:17.501707 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:17.519353 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:17.609688 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 14:07:17.634692 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 14:07:17.658205 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 14:07:17.681793 1903322 provision.go:87] duration metric: took 576.926595ms to configureAuth
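configureAuth has just generated and copied the machine's server certificate. An illustrative way to confirm the SANs requested above (127.0.0.1, 192.168.49.2, addons-457090, localhost, minikube) actually ended up in it, using the path from the generating step:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'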
	I0429 14:07:17.681824 1903322 ubuntu.go:193] setting minikube options for container-runtime
	I0429 14:07:17.682008 1903322 config.go:182] Loaded profile config "addons-457090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:07:17.682114 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:17.697657 1903322 main.go:141] libmachine: Using SSH client type: native
	I0429 14:07:17.697906 1903322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35042 <nil> <nil>}
	I0429 14:07:17.697926 1903322 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 14:07:17.931541 1903322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 14:07:17.931568 1903322 machine.go:97] duration metric: took 1.288248315s to provisionDockerMachine
	I0429 14:07:17.931578 1903322 client.go:171] duration metric: took 9.223615798s to LocalClient.Create
	I0429 14:07:17.931592 1903322 start.go:167] duration metric: took 9.223684951s to libmachine.API.Create "addons-457090"
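Provisioning wrote the CRI-O drop-in shown a few lines up; an illustrative check from the host that the insecure-registry option landed inside the node:

	docker exec addons-457090 cat /etc/sysconfig/crio.minikube
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '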
	I0429 14:07:17.931599 1903322 start.go:293] postStartSetup for "addons-457090" (driver="docker")
	I0429 14:07:17.931610 1903322 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 14:07:17.931671 1903322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 14:07:17.931718 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:17.948582 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:18.039110 1903322 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 14:07:18.042932 1903322 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0429 14:07:18.042990 1903322 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0429 14:07:18.043003 1903322 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0429 14:07:18.043018 1903322 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0429 14:07:18.043033 1903322 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-1897267/.minikube/addons for local assets ...
	I0429 14:07:18.043116 1903322 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-1897267/.minikube/files for local assets ...
	I0429 14:07:18.043155 1903322 start.go:296] duration metric: took 111.549816ms for postStartSetup
	I0429 14:07:18.043524 1903322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-457090
	I0429 14:07:18.059907 1903322 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/config.json ...
	I0429 14:07:18.060217 1903322 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 14:07:18.060293 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:18.078692 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:18.165599 1903322 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 14:07:18.169873 1903322 start.go:128] duration metric: took 9.465111051s to createHost
	I0429 14:07:18.169895 1903322 start.go:83] releasing machines lock for "addons-457090", held for 9.465267489s
	I0429 14:07:18.169962 1903322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-457090
	I0429 14:07:18.185397 1903322 ssh_runner.go:195] Run: cat /version.json
	I0429 14:07:18.185448 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:18.185478 1903322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 14:07:18.185530 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:18.206866 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:18.208166 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:18.292061 1903322 ssh_runner.go:195] Run: systemctl --version
	I0429 14:07:18.296927 1903322 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 14:07:18.460392 1903322 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 14:07:18.464530 1903322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 14:07:18.486628 1903322 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0429 14:07:18.486704 1903322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 14:07:18.515812 1903322 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0429 14:07:18.515834 1903322 start.go:494] detecting cgroup driver to use...
	I0429 14:07:18.515864 1903322 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0429 14:07:18.515933 1903322 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 14:07:18.532069 1903322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 14:07:18.545326 1903322 docker.go:217] disabling cri-docker service (if available) ...
	I0429 14:07:18.545391 1903322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 14:07:18.560555 1903322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 14:07:18.576103 1903322 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 14:07:18.677657 1903322 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 14:07:18.782118 1903322 docker.go:233] disabling docker service ...
	I0429 14:07:18.782185 1903322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 14:07:18.802358 1903322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 14:07:18.814464 1903322 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 14:07:18.899110 1903322 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 14:07:18.996822 1903322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 14:07:19.010748 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 14:07:19.027568 1903322 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 14:07:19.027637 1903322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:07:19.038137 1903322 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 14:07:19.038245 1903322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:07:19.047772 1903322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:07:19.057458 1903322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:07:19.067392 1903322 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 14:07:19.076463 1903322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:07:19.085874 1903322 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:07:19.101173 1903322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:07:19.110962 1903322 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 14:07:19.119965 1903322 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 14:07:19.128360 1903322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:07:19.216325 1903322 ssh_runner.go:195] Run: sudo systemctl restart crio
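The sed edits above rewrote /etc/crio/crio.conf.d/02-crio.conf before this restart; an illustrative summary of their net effect, run from the host:

	docker exec addons-457090 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected: pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs",
	#           conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls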
	I0429 14:07:19.332002 1903322 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 14:07:19.332145 1903322 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 14:07:19.335814 1903322 start.go:562] Will wait 60s for crictl version
	I0429 14:07:19.335880 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:07:19.339443 1903322 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 14:07:19.381985 1903322 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0429 14:07:19.382095 1903322 ssh_runner.go:195] Run: crio --version
	I0429 14:07:19.421468 1903322 ssh_runner.go:195] Run: crio --version
	I0429 14:07:19.471680 1903322 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.24.6 ...
	I0429 14:07:19.473838 1903322 cli_runner.go:164] Run: docker network inspect addons-457090 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 14:07:19.489193 1903322 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0429 14:07:19.492958 1903322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 14:07:19.504081 1903322 kubeadm.go:877] updating cluster {Name:addons-457090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-457090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 14:07:19.504207 1903322 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:07:19.504268 1903322 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 14:07:19.580861 1903322 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 14:07:19.580883 1903322 crio.go:433] Images already preloaded, skipping extraction
	I0429 14:07:19.580937 1903322 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 14:07:19.620228 1903322 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 14:07:19.620252 1903322 cache_images.go:84] Images are preloaded, skipping loading
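Both crictl runs above confirm the preload; the equivalent manual check from the host (illustrative, assuming jq is installed) would be:

	docker exec addons-457090 sudo crictl images --output json | jq -r '.images[].repoTags[0]'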
	I0429 14:07:19.620261 1903322 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.0 crio true true} ...
	I0429 14:07:19.620354 1903322 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-457090 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-457090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 14:07:19.620443 1903322 ssh_runner.go:195] Run: crio config
	I0429 14:07:19.667700 1903322 cni.go:84] Creating CNI manager for ""
	I0429 14:07:19.667724 1903322 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 14:07:19.667741 1903322 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 14:07:19.667763 1903322 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-457090 NodeName:addons-457090 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 14:07:19.667916 1903322 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-457090"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
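The generated kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new and copied into place before init further down in this log. An illustrative dry-run validation of a config like this, using the bundled kubeadm binary (not something the test itself runs), is:

	docker exec addons-457090 sudo /var/lib/minikube/binaries/v1.30.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run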
	
	I0429 14:07:19.667985 1903322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 14:07:19.676744 1903322 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 14:07:19.676811 1903322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 14:07:19.685267 1903322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0429 14:07:19.702664 1903322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 14:07:19.720185 1903322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0429 14:07:19.737727 1903322 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0429 14:07:19.740977 1903322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
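Together with the host.minikube.internal entry added earlier, the node's /etc/hosts now carries both minikube-internal names; illustratively:

	docker exec addons-457090 grep minikube.internal /etc/hosts
	# expected: 192.168.49.1  host.minikube.internal
	#           192.168.49.2  control-plane.minikube.internal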
	I0429 14:07:19.751643 1903322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:07:19.840633 1903322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 14:07:19.854203 1903322 certs.go:68] Setting up /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090 for IP: 192.168.49.2
	I0429 14:07:19.854276 1903322 certs.go:194] generating shared ca certs ...
	I0429 14:07:19.854305 1903322 certs.go:226] acquiring lock for ca certs: {Name:mk012c6865f9f1625b7bfd5d0280b6707793520e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:19.854462 1903322 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key
	I0429 14:07:20.329665 1903322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt ...
	I0429 14:07:20.329703 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt: {Name:mka2019fbfe59146662f34b9c21b1924ee4d4781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:20.329951 1903322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key ...
	I0429 14:07:20.329968 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key: {Name:mk0778f4bf44036cace3ccb43916ea03bd13d929 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:20.330063 1903322 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key
	I0429 14:07:20.761699 1903322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.crt ...
	I0429 14:07:20.761736 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.crt: {Name:mk636a2913a13527b6a821d0a19482cdb8456da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:20.761932 1903322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key ...
	I0429 14:07:20.761946 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key: {Name:mk8a61b3fea9bc6c8b23591c0561875cabea7997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:20.762032 1903322 certs.go:256] generating profile certs ...
	I0429 14:07:20.762098 1903322 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.key
	I0429 14:07:20.762119 1903322 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt with IP's: []
	I0429 14:07:21.023439 1903322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt ...
	I0429 14:07:21.023471 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: {Name:mk52bba284d9b76dabfc3f7a15a199308f6ebebe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:21.023663 1903322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.key ...
	I0429 14:07:21.023676 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.key: {Name:mkb948f73d775d0f71a7f77dd796aca72d0a0e47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:21.023763 1903322 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.key.0cec27d5
	I0429 14:07:21.023785 1903322 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.crt.0cec27d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0429 14:07:21.495009 1903322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.crt.0cec27d5 ...
	I0429 14:07:21.495043 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.crt.0cec27d5: {Name:mkbeccdc1710362174910617f4bca97d1c55e709 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:21.495251 1903322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.key.0cec27d5 ...
	I0429 14:07:21.495270 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.key.0cec27d5: {Name:mkbd1fb5674880c1d08aa291a724907ba2c49844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:21.495362 1903322 certs.go:381] copying /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.crt.0cec27d5 -> /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.crt
	I0429 14:07:21.495449 1903322 certs.go:385] copying /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.key.0cec27d5 -> /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.key
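The profile apiserver certificate was just signed for the service IP, loopback and node IPs listed above; an illustrative way to read those SANs back out of the written cert:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
	# expected IPs: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2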
	I0429 14:07:21.495504 1903322 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/proxy-client.key
	I0429 14:07:21.495526 1903322 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/proxy-client.crt with IP's: []
	I0429 14:07:21.940445 1903322 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/proxy-client.crt ...
	I0429 14:07:21.940479 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/proxy-client.crt: {Name:mk7cfbd8e5ee4155ae9c21eb6f1f17142ba58dac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:21.940692 1903322 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/proxy-client.key ...
	I0429 14:07:21.940708 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/proxy-client.key: {Name:mk1e89081eb29f83f8c3a45d37d3ea69612ced43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:21.940934 1903322 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 14:07:21.940980 1903322 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem (1078 bytes)
	I0429 14:07:21.941005 1903322 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem (1123 bytes)
	I0429 14:07:21.941032 1903322 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem (1679 bytes)
	I0429 14:07:21.941687 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 14:07:21.967017 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 14:07:21.991865 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 14:07:22.020990 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 14:07:22.046811 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0429 14:07:22.071941 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 14:07:22.096452 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 14:07:22.120823 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 14:07:22.145451 1903322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 14:07:22.172925 1903322 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
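All CA and profile material has now been copied into the node; an illustrative listing of the target directory from the host:

	docker exec addons-457090 sudo ls -l /var/lib/minikube/certs/
	# expected: apiserver.crt/.key, ca.crt/.key, proxy-client-ca.crt/.key, proxy-client.crt/.key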
	I0429 14:07:22.193804 1903322 ssh_runner.go:195] Run: openssl version
	I0429 14:07:22.202234 1903322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 14:07:22.212459 1903322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:07:22.216222 1903322 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 14:07 /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:07:22.216395 1903322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:07:22.223668 1903322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
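The b5213941.0 symlink above follows OpenSSL's subject-hash naming convention; rerunning the hash command shows where that name comes from (illustrative):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 link created above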
	I0429 14:07:22.238455 1903322 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 14:07:22.242063 1903322 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 14:07:22.242136 1903322 kubeadm.go:391] StartCluster: {Name:addons-457090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-457090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:07:22.242226 1903322 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 14:07:22.242297 1903322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 14:07:22.285318 1903322 cri.go:89] found id: ""
	I0429 14:07:22.285386 1903322 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 14:07:22.295888 1903322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 14:07:22.304970 1903322 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0429 14:07:22.305055 1903322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 14:07:22.314032 1903322 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 14:07:22.314052 1903322 kubeadm.go:156] found existing configuration files:
	
	I0429 14:07:22.314122 1903322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 14:07:22.323116 1903322 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 14:07:22.323202 1903322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 14:07:22.331826 1903322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 14:07:22.340979 1903322 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 14:07:22.341072 1903322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 14:07:22.349538 1903322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 14:07:22.358866 1903322 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 14:07:22.358945 1903322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 14:07:22.367425 1903322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 14:07:22.376471 1903322 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 14:07:22.376533 1903322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
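Note on the cleanup above: minikube greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it (here the files simply do not exist yet), so the kubeadm init that follows can regenerate them. A minimal shell sketch of that check-and-remove pattern, assuming the same endpoint and file list shown in the log:

    # sketch only: the stale-kubeconfig cleanup pattern recorded in the log above
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"    # missing or stale: let kubeadm recreate it
      fi
    done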
	I0429 14:07:22.384850 1903322 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0429 14:07:22.433339 1903322 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 14:07:22.433632 1903322 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 14:07:22.471872 1903322 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0429 14:07:22.472005 1903322 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1058-aws
	I0429 14:07:22.472072 1903322 kubeadm.go:309] OS: Linux
	I0429 14:07:22.472137 1903322 kubeadm.go:309] CGROUPS_CPU: enabled
	I0429 14:07:22.472211 1903322 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0429 14:07:22.472285 1903322 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0429 14:07:22.472359 1903322 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0429 14:07:22.472431 1903322 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0429 14:07:22.472496 1903322 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0429 14:07:22.472571 1903322 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0429 14:07:22.472639 1903322 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0429 14:07:22.472730 1903322 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0429 14:07:22.538884 1903322 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 14:07:22.539047 1903322 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 14:07:22.539168 1903322 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 14:07:22.785018 1903322 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 14:07:22.788710 1903322 out.go:204]   - Generating certificates and keys ...
	I0429 14:07:22.788826 1903322 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 14:07:22.788909 1903322 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 14:07:24.117498 1903322 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 14:07:24.597909 1903322 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 14:07:24.824332 1903322 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 14:07:25.177026 1903322 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 14:07:25.420914 1903322 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 14:07:25.421062 1903322 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-457090 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0429 14:07:25.608470 1903322 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 14:07:25.608798 1903322 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-457090 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0429 14:07:25.870697 1903322 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 14:07:26.389423 1903322 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 14:07:26.565053 1903322 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 14:07:26.565333 1903322 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 14:07:26.742781 1903322 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 14:07:27.135068 1903322 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 14:07:27.754457 1903322 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 14:07:28.306023 1903322 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 14:07:28.837456 1903322 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 14:07:28.838219 1903322 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 14:07:28.843076 1903322 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 14:07:28.845114 1903322 out.go:204]   - Booting up control plane ...
	I0429 14:07:28.845213 1903322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 14:07:28.845289 1903322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 14:07:28.846030 1903322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 14:07:28.867187 1903322 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 14:07:28.868119 1903322 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 14:07:28.868328 1903322 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 14:07:28.966286 1903322 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 14:07:28.966373 1903322 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 14:07:29.967119 1903322 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.000919123s
	I0429 14:07:29.967239 1903322 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 14:07:36.469977 1903322 kubeadm.go:309] [api-check] The API server is healthy after 6.502849708s
	I0429 14:07:36.490983 1903322 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 14:07:36.505592 1903322 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 14:07:36.530007 1903322 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 14:07:36.530204 1903322 kubeadm.go:309] [mark-control-plane] Marking the node addons-457090 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 14:07:36.540911 1903322 kubeadm.go:309] [bootstrap-token] Using token: 299kq3.syi4mwk6phg59drt
	I0429 14:07:36.542844 1903322 out.go:204]   - Configuring RBAC rules ...
	I0429 14:07:36.542971 1903322 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 14:07:36.547511 1903322 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 14:07:36.556370 1903322 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 14:07:36.560078 1903322 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 14:07:36.564151 1903322 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 14:07:36.568408 1903322 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 14:07:36.877214 1903322 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 14:07:37.320720 1903322 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 14:07:37.876249 1903322 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 14:07:37.877488 1903322 kubeadm.go:309] 
	I0429 14:07:37.877558 1903322 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 14:07:37.877569 1903322 kubeadm.go:309] 
	I0429 14:07:37.877651 1903322 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 14:07:37.877660 1903322 kubeadm.go:309] 
	I0429 14:07:37.877689 1903322 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 14:07:37.877758 1903322 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 14:07:37.877811 1903322 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 14:07:37.877821 1903322 kubeadm.go:309] 
	I0429 14:07:37.877874 1903322 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 14:07:37.877882 1903322 kubeadm.go:309] 
	I0429 14:07:37.877928 1903322 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 14:07:37.877936 1903322 kubeadm.go:309] 
	I0429 14:07:37.877986 1903322 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 14:07:37.878065 1903322 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 14:07:37.878136 1903322 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 14:07:37.878145 1903322 kubeadm.go:309] 
	I0429 14:07:37.878226 1903322 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 14:07:37.878303 1903322 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 14:07:37.878311 1903322 kubeadm.go:309] 
	I0429 14:07:37.878392 1903322 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 299kq3.syi4mwk6phg59drt \
	I0429 14:07:37.878495 1903322 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:21d9b8764194e6fe6c1583ba013e3f02163c5cceb0b910b9847eaf47c168f2e3 \
	I0429 14:07:37.878517 1903322 kubeadm.go:309] 	--control-plane 
	I0429 14:07:37.878526 1903322 kubeadm.go:309] 
	I0429 14:07:37.878608 1903322 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 14:07:37.878630 1903322 kubeadm.go:309] 
	I0429 14:07:37.878711 1903322 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 299kq3.syi4mwk6phg59drt \
	I0429 14:07:37.878813 1903322 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:21d9b8764194e6fe6c1583ba013e3f02163c5cceb0b910b9847eaf47c168f2e3 
	I0429 14:07:37.882282 1903322 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1058-aws\n", err: exit status 1
	I0429 14:07:37.882397 1903322 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
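The join commands printed above embed a CA certificate hash. For reference, that sha256 value can be recomputed from the cluster CA with the standard openssl pipeline from the kubeadm documentation; the path below assumes minikube's certificate directory reported earlier in this log (/var/lib/minikube/certs):

    # recompute the --discovery-token-ca-cert-hash value (run on the control-plane node)
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'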
	I0429 14:07:37.882417 1903322 cni.go:84] Creating CNI manager for ""
	I0429 14:07:37.882425 1903322 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 14:07:37.885486 1903322 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 14:07:37.887217 1903322 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 14:07:37.890984 1903322 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 14:07:37.891003 1903322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 14:07:37.909041 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 14:07:38.177444 1903322 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 14:07:38.177576 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:38.177669 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-457090 minikube.k8s.io/updated_at=2024_04_29T14_07_38_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=94a01df0d4b48636d4af3d06a53be687e06c0844 minikube.k8s.io/name=addons-457090 minikube.k8s.io/primary=true
	I0429 14:07:38.321955 1903322 ops.go:34] apiserver oom_adj: -16
	I0429 14:07:38.322056 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:38.822612 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:39.322187 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:39.822896 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:40.322867 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:40.822668 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:41.322497 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:41.822935 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:42.322739 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:42.822966 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:43.322402 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:43.823123 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:44.322873 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:44.823060 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:45.323239 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:45.822996 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:46.322647 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:46.823086 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:47.322514 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:47.822688 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:48.322988 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:48.822385 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:49.323051 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:49.823158 1903322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 14:07:49.910592 1903322 kubeadm.go:1107] duration metric: took 11.733062181s to wait for elevateKubeSystemPrivileges
	W0429 14:07:49.910628 1903322 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 14:07:49.910638 1903322 kubeadm.go:393] duration metric: took 27.668520713s to StartCluster
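The repeated "kubectl get sa default" calls above are minikube polling for the default ServiceAccount in the default namespace to be created by the controller-manager; once it exists the step completes and is recorded as the elevateKubeSystemPrivileges duration (about 11.7s here). A rough sketch of that wait, assuming the same binary and kubeconfig paths as in the log:

    # sketch of the "wait for the default service account" polling seen above
    KUBECTL=/var/lib/minikube/binaries/v1.30.0/kubectl
    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the log shows retries roughly every half second
    done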
	I0429 14:07:49.910653 1903322 settings.go:142] acquiring lock: {Name:mkd5b42c61905151cf6a97c69329c4a81e851953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:49.910769 1903322 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:07:49.911211 1903322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/kubeconfig: {Name:mkd7a824e40528d6a3c0c02051ff0aa2d4aebaa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:07:49.911410 1903322 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 14:07:49.913684 1903322 out.go:177] * Verifying Kubernetes components...
	I0429 14:07:49.911533 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 14:07:49.911693 1903322 config.go:182] Loaded profile config "addons-457090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:07:49.911703 1903322 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
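The toEnable map above records which addons this test profile turns on: the entries set to true (ingress, ingress-dns, metrics-server, registry, csi-hostpath-driver, volumesnapshots, yakd, and the storage provisioners, among others) are what the remainder of this log installs. Outside the test harness the same toggle is expressed per addon, for example:

    # hypothetical manual equivalent for a single addon on this profile
    minikube -p addons-457090 addons enable metrics-server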
	I0429 14:07:49.915370 1903322 addons.go:69] Setting yakd=true in profile "addons-457090"
	I0429 14:07:49.915397 1903322 addons.go:234] Setting addon yakd=true in "addons-457090"
	I0429 14:07:49.915397 1903322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:07:49.915427 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.915481 1903322 addons.go:69] Setting ingress-dns=true in profile "addons-457090"
	I0429 14:07:49.915501 1903322 addons.go:234] Setting addon ingress-dns=true in "addons-457090"
	I0429 14:07:49.915531 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.915883 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.915904 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.916247 1903322 addons.go:69] Setting inspektor-gadget=true in profile "addons-457090"
	I0429 14:07:49.916271 1903322 addons.go:234] Setting addon inspektor-gadget=true in "addons-457090"
	I0429 14:07:49.916294 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.916692 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.916953 1903322 addons.go:69] Setting cloud-spanner=true in profile "addons-457090"
	I0429 14:07:49.916975 1903322 addons.go:234] Setting addon cloud-spanner=true in "addons-457090"
	I0429 14:07:49.917000 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.917353 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.919671 1903322 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-457090"
	I0429 14:07:49.919739 1903322 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-457090"
	I0429 14:07:49.919771 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.920170 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.929934 1903322 addons.go:69] Setting metrics-server=true in profile "addons-457090"
	I0429 14:07:49.930028 1903322 addons.go:234] Setting addon metrics-server=true in "addons-457090"
	I0429 14:07:49.930095 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.930607 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.930792 1903322 addons.go:69] Setting default-storageclass=true in profile "addons-457090"
	I0429 14:07:49.930841 1903322 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-457090"
	I0429 14:07:49.933057 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.949132 1903322 addons.go:69] Setting gcp-auth=true in profile "addons-457090"
	I0429 14:07:49.949241 1903322 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-457090"
	I0429 14:07:49.949264 1903322 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-457090"
	I0429 14:07:49.949301 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.949751 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.949952 1903322 mustload.go:65] Loading cluster: addons-457090
	I0429 14:07:49.950107 1903322 config.go:182] Loaded profile config "addons-457090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:07:49.950311 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.952089 1903322 addons.go:69] Setting registry=true in profile "addons-457090"
	I0429 14:07:49.952122 1903322 addons.go:234] Setting addon registry=true in "addons-457090"
	I0429 14:07:49.952161 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.952552 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.957809 1903322 addons.go:69] Setting storage-provisioner=true in profile "addons-457090"
	I0429 14:07:49.957909 1903322 addons.go:234] Setting addon storage-provisioner=true in "addons-457090"
	I0429 14:07:49.957976 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:49.958492 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:49.960511 1903322 addons.go:69] Setting ingress=true in profile "addons-457090"
	I0429 14:07:49.971551 1903322 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-457090"
	I0429 14:07:49.971559 1903322 addons.go:69] Setting volumesnapshots=true in profile "addons-457090"
	I0429 14:07:50.015453 1903322 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0429 14:07:50.017156 1903322 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0429 14:07:50.017185 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0429 14:07:50.017267 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.028914 1903322 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0429 14:07:50.030550 1903322 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 14:07:50.030570 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0429 14:07:50.030637 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.029818 1903322 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-457090"
	I0429 14:07:50.029847 1903322 addons.go:234] Setting addon ingress=true in "addons-457090"
	I0429 14:07:50.029882 1903322 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0429 14:07:50.029898 1903322 addons.go:234] Setting addon volumesnapshots=true in "addons-457090"
	I0429 14:07:50.029906 1903322 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0429 14:07:50.049183 1903322 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0429 14:07:50.049206 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0429 14:07:50.049270 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.051869 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0429 14:07:50.053709 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0429 14:07:50.057339 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0429 14:07:50.064856 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0429 14:07:50.062898 1903322 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0429 14:07:50.062908 1903322 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0429 14:07:50.063236 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:50.063270 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:50.063306 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:50.072456 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:50.076536 1903322 addons.go:234] Setting addon default-storageclass=true in "addons-457090"
	I0429 14:07:50.077721 1903322 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0429 14:07:50.078221 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:50.092648 1903322 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0429 14:07:50.095524 1903322 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 14:07:50.095548 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 14:07:50.095612 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.092925 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0429 14:07:50.106440 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0429 14:07:50.121572 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0429 14:07:50.105379 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:50.093669 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0429 14:07:50.093707 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:50.093056 1903322 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 14:07:50.128458 1903322 out.go:177]   - Using image docker.io/registry:2.8.3
	I0429 14:07:50.129012 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:50.129041 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.129053 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0429 14:07:50.133540 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0429 14:07:50.144894 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.149014 1903322 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0429 14:07:50.149161 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0429 14:07:50.149224 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.149058 1903322 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0429 14:07:50.168943 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0429 14:07:50.169018 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.149066 1903322 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 14:07:50.190663 1903322 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 14:07:50.190684 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 14:07:50.190748 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.191017 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.228515 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.229165 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.274594 1903322 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-457090"
	I0429 14:07:50.274637 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:50.275185 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:07:50.297014 1903322 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0429 14:07:50.309517 1903322 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0429 14:07:50.309585 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0429 14:07:50.309691 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.319701 1903322 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 14:07:50.319725 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 14:07:50.319789 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.340723 1903322 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0429 14:07:50.342647 1903322 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0429 14:07:50.341253 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.339300 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.306863 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.355408 1903322 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0429 14:07:50.358305 1903322 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 14:07:50.358328 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0429 14:07:50.358393 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.396934 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.397286 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.400468 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.402924 1903322 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0429 14:07:50.404772 1903322 out.go:177]   - Using image docker.io/busybox:stable
	I0429 14:07:50.412461 1903322 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 14:07:50.412483 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0429 14:07:50.412544 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:50.445005 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.446025 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.452786 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.464977 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:50.540903 1903322 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0429 14:07:50.540930 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0429 14:07:50.573298 1903322 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0429 14:07:50.573324 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0429 14:07:50.624222 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0429 14:07:50.647268 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 14:07:50.691721 1903322 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0429 14:07:50.691794 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0429 14:07:50.704277 1903322 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 14:07:50.704348 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0429 14:07:50.736962 1903322 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0429 14:07:50.737031 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0429 14:07:50.758623 1903322 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0429 14:07:50.758694 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0429 14:07:50.766732 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 14:07:50.780016 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 14:07:50.784356 1903322 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0429 14:07:50.784427 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0429 14:07:50.803286 1903322 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 14:07:50.803355 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 14:07:50.853024 1903322 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0429 14:07:50.853095 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0429 14:07:50.855135 1903322 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0429 14:07:50.855198 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0429 14:07:50.877043 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 14:07:50.879541 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 14:07:50.890989 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 14:07:50.916296 1903322 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0429 14:07:50.916367 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0429 14:07:50.929134 1903322 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.013709315s)
	I0429 14:07:50.929283 1903322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 14:07:50.929163 1903322 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.012327317s)
	I0429 14:07:50.929522 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 14:07:50.933388 1903322 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 14:07:50.933453 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 14:07:50.933661 1903322 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0429 14:07:50.933695 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0429 14:07:50.959215 1903322 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0429 14:07:50.959287 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0429 14:07:51.047320 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0429 14:07:51.066958 1903322 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0429 14:07:51.067031 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0429 14:07:51.125375 1903322 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0429 14:07:51.125450 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0429 14:07:51.138951 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 14:07:51.150957 1903322 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0429 14:07:51.151024 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0429 14:07:51.156115 1903322 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0429 14:07:51.156193 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0429 14:07:51.245351 1903322 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0429 14:07:51.245427 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0429 14:07:51.327769 1903322 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0429 14:07:51.327896 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0429 14:07:51.359887 1903322 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0429 14:07:51.359957 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0429 14:07:51.372063 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0429 14:07:51.445909 1903322 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0429 14:07:51.445936 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0429 14:07:51.507189 1903322 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0429 14:07:51.507216 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0429 14:07:51.511967 1903322 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0429 14:07:51.512002 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0429 14:07:51.561847 1903322 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 14:07:51.561878 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0429 14:07:51.601366 1903322 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0429 14:07:51.601400 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0429 14:07:51.610605 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 14:07:51.648038 1903322 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 14:07:51.648072 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0429 14:07:51.665161 1903322 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0429 14:07:51.665191 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0429 14:07:51.741997 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 14:07:51.744400 1903322 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0429 14:07:51.744422 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0429 14:07:51.806488 1903322 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0429 14:07:51.806510 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0429 14:07:51.899739 1903322 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 14:07:51.899766 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0429 14:07:51.972500 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 14:07:54.197102 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.5728438s)
	I0429 14:07:54.335169 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.687865488s)
	I0429 14:07:54.335255 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.568457053s)
	I0429 14:07:54.628080 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.847989845s)
	I0429 14:07:54.824025 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.946906153s)
	I0429 14:07:55.993028 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.113403778s)
	I0429 14:07:55.993130 1903322 addons.go:470] Verifying addon ingress=true in "addons-457090"
	I0429 14:07:55.995630 1903322 out.go:177] * Verifying ingress addon...
	I0429 14:07:55.993400 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.102332723s)
	I0429 14:07:55.993520 1903322 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.063959433s)
	I0429 14:07:55.993536 1903322 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.06421265s)
	I0429 14:07:55.993568 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.946177269s)
	I0429 14:07:55.993644 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.85462323s)
	I0429 14:07:55.993694 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.621596981s)
	I0429 14:07:55.993766 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.383134718s)
	I0429 14:07:55.994004 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.251824062s)
	I0429 14:07:55.996100 1903322 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
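The long sed pipeline whose completion is logged above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1), which is what the "host record injected" line confirms. The result can be checked by dumping the patched Corefile; the inserted stanza is the hosts block visible in the command itself:

    # inspect the patched Corefile (the data key of the coredns ConfigMap is "Corefile")
    kubectl --context addons-457090 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    #   hosts {
    #      192.168.49.1 host.minikube.internal
    #      fallthrough
    #   }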
	I0429 14:07:55.997092 1903322 node_ready.go:35] waiting up to 6m0s for node "addons-457090" to be "Ready" ...
	I0429 14:07:55.997444 1903322 addons.go:470] Verifying addon registry=true in "addons-457090"
	I0429 14:07:55.997454 1903322 addons.go:470] Verifying addon metrics-server=true in "addons-457090"
	W0429 14:07:55.997489 1903322 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0429 14:07:56.000393 1903322 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0429 14:07:56.002674 1903322 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-457090 service yakd-dashboard -n yakd-dashboard
	
	I0429 14:07:56.003031 1903322 retry.go:31] will retry after 212.063025ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
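The failure above is an apply-ordering race: the VolumeSnapshot CRDs and the VolumeSnapshotClass object that depends on them are submitted in one kubectl apply, and the API server has not yet registered the new CRD when the class is validated, hence "no matches for kind VolumeSnapshotClass". minikube simply retries a moment later (the re-apply at 14:07:56.217915 below, this time with --force). One way to avoid the race when applying such manifests by hand is to wait for the CRD to become Established before applying objects that use it, for example:

    # sketch only: apply CRDs, wait until Established, then apply the objects that depend on them
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml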
	I0429 14:07:56.007199 1903322 out.go:177] * Verifying registry addon...
	I0429 14:07:56.010788 1903322 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0429 14:07:56.028788 1903322 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0429 14:07:56.028865 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:07:56.036513 1903322 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0429 14:07:56.036543 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:07:56.217915 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 14:07:56.561541 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:07:56.570893 1903322 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-457090" context rescaled to 1 replicas
	I0429 14:07:56.584979 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:07:56.634986 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.662430579s)
	I0429 14:07:56.635022 1903322 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-457090"
	I0429 14:07:56.637647 1903322 out.go:177] * Verifying csi-hostpath-driver addon...
	I0429 14:07:56.640760 1903322 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0429 14:07:56.701452 1903322 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0429 14:07:56.701483 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:07:57.043821 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:07:57.057102 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:07:57.146112 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:07:57.510995 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:07:57.526434 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:07:57.645629 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:07:58.008038 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:07:58.009705 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:07:58.017299 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:07:58.148147 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:07:58.512827 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:07:58.520136 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:07:58.645583 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:07:59.008276 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:07:59.022101 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:07:59.150329 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:07:59.366143 1903322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.148138012s)
	I0429 14:07:59.523490 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:07:59.525059 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:07:59.651360 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:07:59.798381 1903322 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0429 14:07:59.798464 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:07:59.816011 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:07:59.949419 1903322 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0429 14:07:59.973268 1903322 addons.go:234] Setting addon gcp-auth=true in "addons-457090"
	I0429 14:07:59.973325 1903322 host.go:66] Checking if "addons-457090" exists ...
	I0429 14:07:59.973810 1903322 cli_runner.go:164] Run: docker container inspect addons-457090 --format={{.State.Status}}
	I0429 14:08:00.011262 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:00.011829 1903322 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0429 14:08:00.011883 1903322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-457090
	I0429 14:08:00.027129 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:00.061153 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:00.072494 1903322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35042 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/addons-457090/id_rsa Username:docker}
	I0429 14:08:00.154502 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:00.241090 1903322 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0429 14:08:00.242979 1903322 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0429 14:08:00.245083 1903322 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0429 14:08:00.245119 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0429 14:08:00.278775 1903322 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0429 14:08:00.278812 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0429 14:08:00.322424 1903322 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 14:08:00.322459 1903322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0429 14:08:00.371411 1903322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 14:08:00.515857 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:00.535594 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:00.648551 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:01.031670 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:01.032384 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:01.130715 1903322 addons.go:470] Verifying addon gcp-auth=true in "addons-457090"
	I0429 14:08:01.132557 1903322 out.go:177] * Verifying gcp-auth addon...
	I0429 14:08:01.135413 1903322 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0429 14:08:01.138889 1903322 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0429 14:08:01.138958 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:01.146750 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:01.508559 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:01.515198 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:01.640962 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:01.646725 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:02.014104 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:02.014936 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:02.015821 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:02.143196 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:02.146460 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:02.509237 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:02.515537 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:02.639560 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:02.645004 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:03.008622 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:03.015558 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:03.139620 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:03.145802 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:03.506816 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:03.515751 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:03.638837 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:03.645428 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:04.008518 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:04.015659 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:04.139342 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:04.145258 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:04.507480 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:04.509357 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:04.515205 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:04.639700 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:04.644503 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:05.012192 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:05.015813 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:05.138788 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:05.145520 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:05.507237 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:05.515609 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:05.638611 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:05.645060 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:06.014050 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:06.016144 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:06.139306 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:06.145407 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:06.507694 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:06.515899 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:06.639255 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:06.645493 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:07.004576 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:07.007626 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:07.014335 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:07.138795 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:07.145559 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:07.507403 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:07.515271 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:07.640123 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:07.650958 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:08.007893 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:08.014798 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:08.138752 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:08.144537 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:08.507862 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:08.514281 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:08.639249 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:08.644804 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:09.004787 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:09.007580 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:09.015695 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:09.139923 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:09.145811 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:09.507980 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:09.514570 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:09.639988 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:09.645813 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:10.018178 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:10.019237 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:10.139448 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:10.146165 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:10.506698 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:10.515870 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:10.639113 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:10.644405 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:11.005357 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:11.008494 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:11.015166 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:11.139191 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:11.145140 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:11.507443 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:11.515233 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:11.639341 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:11.645089 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:12.009004 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:12.015369 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:12.138806 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:12.144633 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:12.507100 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:12.514840 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:12.638754 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:12.645112 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:13.007413 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:13.015326 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:13.139226 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:13.145274 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:13.505418 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:13.507384 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:13.515108 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:13.639279 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:13.645196 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:14.007725 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:14.014710 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:14.138648 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:14.145012 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:14.507108 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:14.514699 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:14.638913 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:14.645356 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:15.010342 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:15.015482 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:15.139760 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:15.144821 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:15.507317 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:15.515017 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:15.639171 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:15.645879 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:16.008032 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:16.009412 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:16.015087 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:16.139478 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:16.145315 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:16.506567 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:16.515398 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:16.639288 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:16.644554 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:17.007209 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:17.015014 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:17.139314 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:17.145150 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:17.507411 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:17.515170 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:17.639160 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:17.645708 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:18.007797 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:18.014691 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:18.139583 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:18.145594 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:18.509943 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:18.510734 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:18.514517 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:18.638876 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:18.645258 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:19.008023 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:19.014984 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:19.138888 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:19.144828 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:19.507617 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:19.514630 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:19.639473 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:19.645131 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:20.008532 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:20.015915 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:20.139344 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:20.145052 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:20.506958 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:20.514631 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:20.638851 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:20.644533 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:21.005060 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:21.007906 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:21.015149 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:21.138816 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:21.145375 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:21.506952 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:21.515123 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:21.641040 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:21.644565 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:22.007730 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:22.014576 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:22.139605 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:22.144783 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:22.510010 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:22.519253 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:22.640131 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:22.658432 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:23.007802 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:23.014473 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:23.138889 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:23.146064 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:23.506395 1903322 node_ready.go:53] node "addons-457090" has status "Ready":"False"
	I0429 14:08:23.508040 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:23.515822 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:23.639220 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:23.644886 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:24.013314 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:24.016747 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:24.139568 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:24.145203 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:24.507970 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:24.514718 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:24.638394 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:24.644542 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:25.007444 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:25.015770 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:25.139390 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:25.144876 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:25.507835 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:25.514486 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:25.638835 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:25.644465 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:26.044461 1903322 node_ready.go:49] node "addons-457090" has status "Ready":"True"
	I0429 14:08:26.044497 1903322 node_ready.go:38] duration metric: took 30.043019374s for node "addons-457090" to be "Ready" ...
	I0429 14:08:26.044507 1903322 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
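	The three entries above mark the transition out of the node-not-Ready polling loop: once node "addons-457090" reports Ready, the helper switches to waiting up to 6m0s for the listed system-critical pods. A rough manual equivalent of that readiness check, reusing the label selectors and context name shown in this log (illustrative only, not what the Go helper executes), would be:

		kubectl --context addons-457090 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
		kubectl --context addons-457090 -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m

	and likewise for the etcd, kube-controller-manager, kube-proxy and kube-scheduler labels.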
	I0429 14:08:26.067267 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:26.069808 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:26.080150 1903322 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-8c59t" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:26.140707 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:26.148265 1903322 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0429 14:08:26.148288 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:26.508028 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:26.515823 1903322 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0429 14:08:26.515850 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:26.640555 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:26.647268 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:27.039801 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:27.046087 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:27.140638 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:27.147048 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:27.512998 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:27.517960 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:27.639472 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:27.646819 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:28.013410 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:28.019266 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:28.093190 1903322 pod_ready.go:102] pod "coredns-7db6d8ff4d-8c59t" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:28.140117 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:28.148431 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:28.507119 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:28.515433 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:28.638889 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:28.647310 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:29.008536 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:29.026234 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:29.088360 1903322 pod_ready.go:92] pod "coredns-7db6d8ff4d-8c59t" in "kube-system" namespace has status "Ready":"True"
	I0429 14:08:29.088429 1903322 pod_ready.go:81] duration metric: took 3.008244056s for pod "coredns-7db6d8ff4d-8c59t" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.088466 1903322 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-457090" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.103390 1903322 pod_ready.go:92] pod "etcd-addons-457090" in "kube-system" namespace has status "Ready":"True"
	I0429 14:08:29.103463 1903322 pod_ready.go:81] duration metric: took 14.975458ms for pod "etcd-addons-457090" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.103492 1903322 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-457090" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.123660 1903322 pod_ready.go:92] pod "kube-apiserver-addons-457090" in "kube-system" namespace has status "Ready":"True"
	I0429 14:08:29.123743 1903322 pod_ready.go:81] duration metric: took 20.229695ms for pod "kube-apiserver-addons-457090" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.123772 1903322 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-457090" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.142397 1903322 pod_ready.go:92] pod "kube-controller-manager-addons-457090" in "kube-system" namespace has status "Ready":"True"
	I0429 14:08:29.142426 1903322 pod_ready.go:81] duration metric: took 18.62072ms for pod "kube-controller-manager-addons-457090" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.142439 1903322 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6wf6b" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.152851 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:29.161162 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:29.172163 1903322 pod_ready.go:92] pod "kube-proxy-6wf6b" in "kube-system" namespace has status "Ready":"True"
	I0429 14:08:29.172186 1903322 pod_ready.go:81] duration metric: took 29.739135ms for pod "kube-proxy-6wf6b" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.172199 1903322 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-457090" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.483970 1903322 pod_ready.go:92] pod "kube-scheduler-addons-457090" in "kube-system" namespace has status "Ready":"True"
	I0429 14:08:29.484041 1903322 pod_ready.go:81] duration metric: took 311.833609ms for pod "kube-scheduler-addons-457090" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.484067 1903322 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace to be "Ready" ...
	I0429 14:08:29.507572 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:29.515762 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:29.639709 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:29.647914 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:30.030657 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:30.031622 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:30.139210 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:30.146867 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:30.508269 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:30.515436 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:30.638957 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:30.646445 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:31.008947 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:31.027309 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:31.140632 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:31.151130 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:31.491722 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:31.509796 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:31.517535 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:31.639600 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:31.651727 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:32.008743 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:32.016128 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:32.139917 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:32.147717 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:32.508310 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:32.515351 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:32.638741 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:32.645649 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:33.010766 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:33.018472 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:33.139588 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:33.147014 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:33.507288 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:33.517131 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:33.638902 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:33.647073 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:33.994894 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:34.008296 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:34.016905 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:34.140222 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:34.146657 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:34.523155 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:34.530753 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:34.639183 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:34.646152 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:35.010738 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:35.023576 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:35.139261 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:35.147117 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:35.507780 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:35.515142 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:35.639070 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:35.647289 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:36.008203 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:36.016100 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:36.143342 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:36.148145 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:36.492145 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:36.509774 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:36.519276 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:36.639238 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:36.658882 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:37.012813 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:37.017175 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:37.140889 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:37.151210 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:37.507863 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:37.516228 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:37.640061 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:37.654334 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:38.008084 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:38.015762 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:38.139682 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:38.148821 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:38.507665 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:38.517222 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:38.639793 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:38.646473 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:38.990320 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:39.007877 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:39.015737 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:39.139279 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:39.146250 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:39.514899 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:39.520734 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:39.640149 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:39.647831 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:40.010518 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:40.025994 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:40.140052 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:40.148553 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:40.521677 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:40.532967 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:40.639968 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:40.647031 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:41.008845 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:41.015996 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:41.139759 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:41.147657 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:41.494119 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:41.512731 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:41.520265 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:41.639045 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:41.647144 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:42.008345 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:42.017627 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:42.140522 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:42.148994 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:42.509151 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:42.517060 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:42.650783 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:42.659765 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:43.012076 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:43.052371 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:43.139112 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:43.150490 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:43.517010 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:43.523087 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:43.640000 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:43.648180 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:43.989914 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:44.007867 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:44.016471 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:44.139455 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:44.146397 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:44.507239 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:44.516127 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:44.639360 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:44.645790 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:45.008573 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:45.016744 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:45.155175 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:45.175149 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:45.511595 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:45.516184 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:45.640402 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:45.651739 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:45.991087 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:46.007594 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:46.016198 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:46.140009 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:46.146526 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:46.517392 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:46.518243 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:46.641554 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:46.648488 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:47.008318 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:47.016413 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:47.140654 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:47.148267 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:47.507696 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:47.516707 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:47.640970 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:47.649665 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:47.999897 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:48.013694 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:48.042562 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:48.140071 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:48.156046 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:48.517233 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:48.523942 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:48.643774 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:48.649773 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:49.013249 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:49.043986 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:49.139589 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:49.146587 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:49.518656 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:49.524650 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:49.639111 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:49.648034 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:50.017849 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:50.020597 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:50.140216 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:50.148972 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:50.490895 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:50.508487 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:50.525061 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:50.639759 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:50.650033 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:51.020179 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:51.029921 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:51.140571 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:51.149090 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:51.552119 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:51.560720 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:51.640484 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:51.655071 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:52.008141 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:52.016284 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:52.139669 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:52.147674 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:52.507166 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:52.511472 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:52.517233 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:52.640009 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:52.647647 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:53.008089 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:53.017412 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 14:08:53.139321 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:53.146579 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:53.509685 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:53.518295 1903322 kapi.go:107] duration metric: took 57.507506632s to wait for kubernetes.io/minikube-addons=registry ...
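
The kapi.go:96 entries above show minikube polling each addon's pods by label selector until every match reports Ready; the registry selector finished here after roughly 57.5 seconds. A minimal client-go sketch of that polling pattern follows — illustrative only, not minikube's implementation; the kubeconfig path and the 90-second timeout are assumptions, not values from this run:

```go
// Sketch: poll pods matching a label selector until all report Ready.
// Kubeconfig path and timeout are assumed, not taken from the test run.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allReady := len(pods.Items) > 0
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				allReady = false
			}
		}
		if allReady {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // short poll interval, similar to the log's cadence
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 90*time.Second)
	defer cancel()
	if err := waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
	fmt.Println("all pods matching selector are Ready")
}
```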
	I0429 14:08:53.640421 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:53.652140 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:54.008999 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:54.141025 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:54.151587 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:54.507563 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:54.639154 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:54.646347 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:54.990348 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:55.008793 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:55.139380 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:55.146745 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:55.508177 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:55.644297 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:55.648322 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:56.008573 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:56.139684 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:56.151071 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:56.522894 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:56.639387 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:56.659229 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:56.991108 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:57.008803 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:57.140625 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:57.149556 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:57.510060 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:57.641199 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:57.663306 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:58.008986 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:58.139748 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:58.148260 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:58.512494 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:58.638893 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:58.646650 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:59.007213 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:59.139579 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:59.145929 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:08:59.490378 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:08:59.507586 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:08:59.638895 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:08:59.646516 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:00.015312 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:00.150281 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:00.164775 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:00.520597 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:00.639656 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:00.648651 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:01.007795 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:01.139558 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:01.147863 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:01.491990 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:01.507068 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:01.639467 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:01.647552 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:02.009413 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:02.138757 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:02.151143 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:02.516098 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:02.640087 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:02.646177 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:03.008503 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:03.139339 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:03.147774 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:03.500345 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:03.513139 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:03.643637 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:03.656545 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:04.009055 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:04.139773 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:04.150052 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:04.507326 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:04.638597 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:04.646794 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:05.007683 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:05.139247 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:05.147652 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:05.511953 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:05.639303 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:05.647798 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:05.991312 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:06.008445 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:06.139758 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:06.152922 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:06.521280 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:06.645371 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:06.651703 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:07.008121 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:07.139660 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:07.146795 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:07.524140 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:07.640350 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:07.647536 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:08.011325 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:08.143024 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:08.151350 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:08.491790 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:08.515336 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:08.639314 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:08.647397 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:09.008058 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:09.140088 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:09.150748 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:09.520409 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:09.639539 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:09.648719 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:10.020093 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:10.140192 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:10.151764 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:10.507742 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:10.639527 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:10.646669 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:10.990747 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:11.008338 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:11.138764 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:11.146495 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:11.507914 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:11.639437 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:11.646259 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:12.008372 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:12.139327 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:12.146674 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:12.510741 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:12.639244 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:12.649794 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:13.008901 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:13.143842 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:13.153012 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:13.490791 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:13.508000 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:13.639277 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 14:09:13.646854 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:14.008179 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:14.140199 1903322 kapi.go:107] duration metric: took 1m13.004776545s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0429 14:09:14.143250 1903322 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-457090 cluster.
	I0429 14:09:14.145570 1903322 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0429 14:09:14.147631 1903322 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0429 14:09:14.149831 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:14.513227 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:14.646964 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:15.008855 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:15.147306 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:15.492302 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:15.507570 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:15.647566 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:16.007916 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:16.146918 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:16.518043 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:16.657937 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:17.008506 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:17.148455 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:17.516289 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:17.647042 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:17.991233 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:18.008533 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:18.148065 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:18.508543 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:18.647751 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:19.007905 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:19.153322 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:19.510409 1903322 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 14:09:19.653298 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:19.997800 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:20.017589 1903322 kapi.go:107] duration metric: took 1m24.017190607s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0429 14:09:20.151989 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:20.647141 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:21.150236 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:21.646912 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:21.998511 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:22.146778 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:22.647046 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:23.146860 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:23.647269 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:24.021366 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:24.148960 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:24.645830 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:25.146843 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:25.648512 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:26.147564 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:26.492339 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:26.661342 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:27.146844 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:27.646664 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:28.148459 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:28.494577 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:28.647414 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:29.146552 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:29.646341 1903322 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 14:09:30.147073 1903322 kapi.go:107] duration metric: took 1m33.506311851s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0429 14:09:30.150713 1903322 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0429 14:09:30.152876 1903322 addons.go:505] duration metric: took 1m40.241163798s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner storage-provisioner-rancher inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
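
With all fourteen addons reported enabled, the same label selectors the wait loops used can be replayed to spot-check the pods afterwards. A short sketch, assuming kubectl is available on PATH and reusing the addons-457090 context from this run:

```go
// Sketch: list the addon pods by the same label selectors the wait loops polled.
// Assumes kubectl is installed locally; not part of the test harness itself.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	selectors := []string{
		"kubernetes.io/minikube-addons=registry",
		"kubernetes.io/minikube-addons=gcp-auth",
		"app.kubernetes.io/name=ingress-nginx",
		"kubernetes.io/minikube-addons=csi-hostpath-driver",
	}
	for _, sel := range selectors {
		// -A searches all namespaces, since the addons live in different ones.
		out, err := exec.Command("kubectl", "--context", "addons-457090",
			"get", "pods", "-A", "-l", sel, "-o", "wide").CombinedOutput()
		if err != nil {
			fmt.Printf("selector %s: %v\n", sel, err)
			continue
		}
		fmt.Printf("selector %s:\n%s\n", sel, out)
	}
}
```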
	I0429 14:09:30.990475 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:33.489517 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:35.490130 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:37.490807 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:39.491004 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:41.491681 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:43.990261 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:46.491983 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:48.991487 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:51.490057 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:53.490838 1903322 pod_ready.go:102] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"False"
	I0429 14:09:54.490716 1903322 pod_ready.go:92] pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace has status "Ready":"True"
	I0429 14:09:54.490746 1903322 pod_ready.go:81] duration metric: took 1m25.006658025s for pod "metrics-server-c59844bb4-hltz2" in "kube-system" namespace to be "Ready" ...
	I0429 14:09:54.490758 1903322 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-b6fbn" in "kube-system" namespace to be "Ready" ...
	I0429 14:09:54.501917 1903322 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-b6fbn" in "kube-system" namespace has status "Ready":"True"
	I0429 14:09:54.501943 1903322 pod_ready.go:81] duration metric: took 11.177606ms for pod "nvidia-device-plugin-daemonset-b6fbn" in "kube-system" namespace to be "Ready" ...
	I0429 14:09:54.501964 1903322 pod_ready.go:38] duration metric: took 1m28.457444913s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 14:09:54.501980 1903322 api_server.go:52] waiting for apiserver process to appear ...
	I0429 14:09:54.502013 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 14:09:54.502078 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 14:09:54.553780 1903322 cri.go:89] found id: "8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93"
	I0429 14:09:54.553804 1903322 cri.go:89] found id: ""
	I0429 14:09:54.553812 1903322 logs.go:276] 1 containers: [8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93]
	I0429 14:09:54.553886 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:09:54.558061 1903322 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 14:09:54.558175 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 14:09:54.602110 1903322 cri.go:89] found id: "3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c"
	I0429 14:09:54.602130 1903322 cri.go:89] found id: ""
	I0429 14:09:54.602138 1903322 logs.go:276] 1 containers: [3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c]
	I0429 14:09:54.602211 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:09:54.605641 1903322 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 14:09:54.605735 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 14:09:54.649461 1903322 cri.go:89] found id: "a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5"
	I0429 14:09:54.649485 1903322 cri.go:89] found id: ""
	I0429 14:09:54.649493 1903322 logs.go:276] 1 containers: [a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5]
	I0429 14:09:54.649546 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:09:54.653026 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 14:09:54.653123 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 14:09:54.693830 1903322 cri.go:89] found id: "dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93"
	I0429 14:09:54.693853 1903322 cri.go:89] found id: ""
	I0429 14:09:54.693861 1903322 logs.go:276] 1 containers: [dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93]
	I0429 14:09:54.693936 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:09:54.698361 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 14:09:54.698433 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 14:09:54.737632 1903322 cri.go:89] found id: "99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67"
	I0429 14:09:54.737653 1903322 cri.go:89] found id: ""
	I0429 14:09:54.737660 1903322 logs.go:276] 1 containers: [99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67]
	I0429 14:09:54.737725 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:09:54.741122 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 14:09:54.741188 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 14:09:54.777864 1903322 cri.go:89] found id: "99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034"
	I0429 14:09:54.777894 1903322 cri.go:89] found id: ""
	I0429 14:09:54.777902 1903322 logs.go:276] 1 containers: [99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034]
	I0429 14:09:54.777957 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:09:54.781446 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 14:09:54.781510 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 14:09:54.818099 1903322 cri.go:89] found id: "0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc"
	I0429 14:09:54.818121 1903322 cri.go:89] found id: ""
	I0429 14:09:54.818130 1903322 logs.go:276] 1 containers: [0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc]
	I0429 14:09:54.818184 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:09:54.821891 1903322 logs.go:123] Gathering logs for kindnet [0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc] ...
	I0429 14:09:54.821927 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc"
	I0429 14:09:54.865904 1903322 logs.go:123] Gathering logs for CRI-O ...
	I0429 14:09:54.865930 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 14:09:54.957217 1903322 logs.go:123] Gathering logs for kubelet ...
	I0429 14:09:54.957254 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 14:09:55.016566 1903322 logs.go:138] Found kubelet problem: Apr 29 14:08:26 addons-457090 kubelet[1499]: W0429 14:08:26.020696    1499 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	W0429 14:09:55.016790 1903322 logs.go:138] Found kubelet problem: Apr 29 14:08:26 addons-457090 kubelet[1499]: E0429 14:08:26.020738    1499 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	I0429 14:09:55.050896 1903322 logs.go:123] Gathering logs for dmesg ...
	I0429 14:09:55.050931 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 14:09:55.073566 1903322 logs.go:123] Gathering logs for etcd [3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c] ...
	I0429 14:09:55.073599 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c"
	I0429 14:09:55.127319 1903322 logs.go:123] Gathering logs for coredns [a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5] ...
	I0429 14:09:55.127352 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5"
	I0429 14:09:55.168253 1903322 logs.go:123] Gathering logs for kube-scheduler [dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93] ...
	I0429 14:09:55.168290 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93"
	I0429 14:09:55.206761 1903322 logs.go:123] Gathering logs for kube-controller-manager [99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034] ...
	I0429 14:09:55.206789 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034"
	I0429 14:09:55.292396 1903322 logs.go:123] Gathering logs for container status ...
	I0429 14:09:55.292437 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 14:09:55.341726 1903322 logs.go:123] Gathering logs for describe nodes ...
	I0429 14:09:55.341758 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 14:09:55.516443 1903322 logs.go:123] Gathering logs for kube-apiserver [8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93] ...
	I0429 14:09:55.516476 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93"
	I0429 14:09:55.570609 1903322 logs.go:123] Gathering logs for kube-proxy [99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67] ...
	I0429 14:09:55.570647 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67"
	I0429 14:09:55.608397 1903322 out.go:304] Setting ErrFile to fd 2...
	I0429 14:09:55.608420 1903322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 14:09:55.608481 1903322 out.go:239] X Problems detected in kubelet:
	W0429 14:09:55.608492 1903322 out.go:239]   Apr 29 14:08:26 addons-457090 kubelet[1499]: W0429 14:08:26.020696    1499 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	W0429 14:09:55.608505 1903322 out.go:239]   Apr 29 14:08:26 addons-457090 kubelet[1499]: E0429 14:08:26.020738    1499 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	I0429 14:09:55.608513 1903322 out.go:304] Setting ErrFile to fd 2...
	I0429 14:09:55.608523 1903322 out.go:338] TERM=,COLORTERM=, which probably does not support color
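
The logs.go:123 block above resolves each control-plane component's container ID with crictl and then tails its last 400 lines. A sketch of that same two-step pattern, using the exact crictl commands from the log; running it inside the node is assumed (the report drives the commands through ssh_runner over SSH):

```go
// Sketch: find a component's container with `crictl ps` and tail its logs,
// mirroring the commands shown in the log. Assumes it runs on the minikube node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerLogs(component string) (string, error) {
	// Same command as the log: sudo crictl ps -a --quiet --name=<component>
	idOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--name="+component).Output()
	if err != nil {
		return "", err
	}
	ids := strings.Fields(string(idOut))
	if len(ids) == 0 {
		return "", fmt.Errorf("no container found for %s", component)
	}
	// Same command as the log: sudo crictl logs --tail 400 <id>
	logOut, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", ids[0]).CombinedOutput()
	return string(logOut), err
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		logs, err := containerLogs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", c, logs)
	}
}
```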
	I0429 14:10:05.610023 1903322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 14:10:05.623923 1903322 api_server.go:72] duration metric: took 2m15.712477396s to wait for apiserver process to appear ...
	I0429 14:10:05.623947 1903322 api_server.go:88] waiting for apiserver healthz status ...
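
Before checking healthz, the run confirms the kube-apiserver process with pgrep. A sketch combining that check with a readiness probe; using `kubectl get --raw /healthz` for the probe is an assumption, since the log does not show how api_server.go queries the endpoint:

```go
// Sketch: confirm the apiserver process exists, then probe /healthz.
// The pgrep pattern matches the log; the kubectl probe is a stand-in.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same pattern as the log: sudo pgrep -xnf kube-apiserver.*minikube.*
	pid, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found:", err)
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(pid)))

	// Hypothetical healthz probe via kubectl; a healthy apiserver answers "ok".
	out, err := exec.Command("kubectl", "--context", "addons-457090",
		"get", "--raw", "/healthz").CombinedOutput()
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	fmt.Println("healthz:", strings.TrimSpace(string(out)))
}
```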
	I0429 14:10:05.623982 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 14:10:05.624041 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 14:10:05.660696 1903322 cri.go:89] found id: "8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93"
	I0429 14:10:05.660716 1903322 cri.go:89] found id: ""
	I0429 14:10:05.660724 1903322 logs.go:276] 1 containers: [8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93]
	I0429 14:10:05.660789 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:05.664360 1903322 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 14:10:05.664424 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 14:10:05.704736 1903322 cri.go:89] found id: "3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c"
	I0429 14:10:05.704759 1903322 cri.go:89] found id: ""
	I0429 14:10:05.704768 1903322 logs.go:276] 1 containers: [3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c]
	I0429 14:10:05.704825 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:05.708530 1903322 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 14:10:05.708603 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 14:10:05.747689 1903322 cri.go:89] found id: "a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5"
	I0429 14:10:05.747708 1903322 cri.go:89] found id: ""
	I0429 14:10:05.747717 1903322 logs.go:276] 1 containers: [a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5]
	I0429 14:10:05.747784 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:05.751408 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 14:10:05.751476 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 14:10:05.792586 1903322 cri.go:89] found id: "dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93"
	I0429 14:10:05.792608 1903322 cri.go:89] found id: ""
	I0429 14:10:05.792615 1903322 logs.go:276] 1 containers: [dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93]
	I0429 14:10:05.792682 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:05.796183 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 14:10:05.796259 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 14:10:05.838053 1903322 cri.go:89] found id: "99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67"
	I0429 14:10:05.838074 1903322 cri.go:89] found id: ""
	I0429 14:10:05.838082 1903322 logs.go:276] 1 containers: [99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67]
	I0429 14:10:05.838138 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:05.841960 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 14:10:05.842031 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 14:10:05.883585 1903322 cri.go:89] found id: "99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034"
	I0429 14:10:05.883606 1903322 cri.go:89] found id: ""
	I0429 14:10:05.883614 1903322 logs.go:276] 1 containers: [99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034]
	I0429 14:10:05.883671 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:05.887338 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 14:10:05.887438 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 14:10:05.927908 1903322 cri.go:89] found id: "0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc"
	I0429 14:10:05.927931 1903322 cri.go:89] found id: ""
	I0429 14:10:05.927939 1903322 logs.go:276] 1 containers: [0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc]
	I0429 14:10:05.928029 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:05.931551 1903322 logs.go:123] Gathering logs for describe nodes ...
	I0429 14:10:05.931576 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 14:10:06.086357 1903322 logs.go:123] Gathering logs for etcd [3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c] ...
	I0429 14:10:06.086394 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c"
	I0429 14:10:06.136506 1903322 logs.go:123] Gathering logs for coredns [a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5] ...
	I0429 14:10:06.136540 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5"
	I0429 14:10:06.178980 1903322 logs.go:123] Gathering logs for kube-scheduler [dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93] ...
	I0429 14:10:06.179009 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93"
	I0429 14:10:06.221801 1903322 logs.go:123] Gathering logs for kube-proxy [99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67] ...
	I0429 14:10:06.221831 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67"
	I0429 14:10:06.269496 1903322 logs.go:123] Gathering logs for kube-controller-manager [99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034] ...
	I0429 14:10:06.269525 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034"
	I0429 14:10:06.341263 1903322 logs.go:123] Gathering logs for CRI-O ...
	I0429 14:10:06.341306 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 14:10:06.438392 1903322 logs.go:123] Gathering logs for kubelet ...
	I0429 14:10:06.438433 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 14:10:06.491821 1903322 logs.go:138] Found kubelet problem: Apr 29 14:08:26 addons-457090 kubelet[1499]: W0429 14:08:26.020696    1499 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	W0429 14:10:06.492033 1903322 logs.go:138] Found kubelet problem: Apr 29 14:08:26 addons-457090 kubelet[1499]: E0429 14:08:26.020738    1499 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	I0429 14:10:06.526818 1903322 logs.go:123] Gathering logs for container status ...
	I0429 14:10:06.526847 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 14:10:06.583368 1903322 logs.go:123] Gathering logs for kube-apiserver [8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93] ...
	I0429 14:10:06.583399 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93"
	I0429 14:10:06.639776 1903322 logs.go:123] Gathering logs for kindnet [0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc] ...
	I0429 14:10:06.639815 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc"
	I0429 14:10:06.687108 1903322 logs.go:123] Gathering logs for dmesg ...
	I0429 14:10:06.687137 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 14:10:06.707252 1903322 out.go:304] Setting ErrFile to fd 2...
	I0429 14:10:06.707289 1903322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 14:10:06.707463 1903322 out.go:239] X Problems detected in kubelet:
	W0429 14:10:06.707483 1903322 out.go:239]   Apr 29 14:08:26 addons-457090 kubelet[1499]: W0429 14:08:26.020696    1499 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	W0429 14:10:06.707518 1903322 out.go:239]   Apr 29 14:08:26 addons-457090 kubelet[1499]: E0429 14:08:26.020738    1499 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	I0429 14:10:06.707534 1903322 out.go:304] Setting ErrFile to fd 2...
	I0429 14:10:06.707541 1903322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:10:16.708970 1903322 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:10:16.716560 1903322 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0429 14:10:16.717569 1903322 api_server.go:141] control plane version: v1.30.0
	I0429 14:10:16.717603 1903322 api_server.go:131] duration metric: took 11.093647819s to wait for apiserver health ...
	I0429 14:10:16.717612 1903322 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 14:10:16.717634 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 14:10:16.717695 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 14:10:16.756049 1903322 cri.go:89] found id: "8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93"
	I0429 14:10:16.756073 1903322 cri.go:89] found id: ""
	I0429 14:10:16.756082 1903322 logs.go:276] 1 containers: [8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93]
	I0429 14:10:16.756140 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:16.759590 1903322 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 14:10:16.759664 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 14:10:16.797693 1903322 cri.go:89] found id: "3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c"
	I0429 14:10:16.797713 1903322 cri.go:89] found id: ""
	I0429 14:10:16.797721 1903322 logs.go:276] 1 containers: [3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c]
	I0429 14:10:16.797777 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:16.801270 1903322 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 14:10:16.801353 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 14:10:16.838206 1903322 cri.go:89] found id: "a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5"
	I0429 14:10:16.838232 1903322 cri.go:89] found id: ""
	I0429 14:10:16.838240 1903322 logs.go:276] 1 containers: [a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5]
	I0429 14:10:16.838297 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:16.841894 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 14:10:16.841963 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 14:10:16.880739 1903322 cri.go:89] found id: "dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93"
	I0429 14:10:16.880760 1903322 cri.go:89] found id: ""
	I0429 14:10:16.880768 1903322 logs.go:276] 1 containers: [dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93]
	I0429 14:10:16.880832 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:16.884327 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 14:10:16.884391 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 14:10:16.923260 1903322 cri.go:89] found id: "99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67"
	I0429 14:10:16.923349 1903322 cri.go:89] found id: ""
	I0429 14:10:16.923384 1903322 logs.go:276] 1 containers: [99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67]
	I0429 14:10:16.923478 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:16.927178 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 14:10:16.927252 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 14:10:16.965465 1903322 cri.go:89] found id: "99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034"
	I0429 14:10:16.965485 1903322 cri.go:89] found id: ""
	I0429 14:10:16.965493 1903322 logs.go:276] 1 containers: [99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034]
	I0429 14:10:16.965547 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:16.969241 1903322 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 14:10:16.969311 1903322 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 14:10:17.018546 1903322 cri.go:89] found id: "0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc"
	I0429 14:10:17.018568 1903322 cri.go:89] found id: ""
	I0429 14:10:17.018576 1903322 logs.go:276] 1 containers: [0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc]
	I0429 14:10:17.018633 1903322 ssh_runner.go:195] Run: which crictl
	I0429 14:10:17.026816 1903322 logs.go:123] Gathering logs for dmesg ...
	I0429 14:10:17.026840 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 14:10:17.045853 1903322 logs.go:123] Gathering logs for etcd [3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c] ...
	I0429 14:10:17.045883 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c"
	I0429 14:10:17.096278 1903322 logs.go:123] Gathering logs for kube-proxy [99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67] ...
	I0429 14:10:17.096310 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67"
	I0429 14:10:17.136366 1903322 logs.go:123] Gathering logs for kindnet [0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc] ...
	I0429 14:10:17.136397 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc"
	I0429 14:10:17.182834 1903322 logs.go:123] Gathering logs for kubelet ...
	I0429 14:10:17.182862 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0429 14:10:17.207800 1903322 logs.go:138] Found kubelet problem: Apr 29 14:08:26 addons-457090 kubelet[1499]: W0429 14:08:26.020696    1499 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	W0429 14:10:17.208005 1903322 logs.go:138] Found kubelet problem: Apr 29 14:08:26 addons-457090 kubelet[1499]: E0429 14:08:26.020738    1499 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	I0429 14:10:17.262314 1903322 logs.go:123] Gathering logs for kube-apiserver [8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93] ...
	I0429 14:10:17.262350 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93"
	I0429 14:10:17.334510 1903322 logs.go:123] Gathering logs for coredns [a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5] ...
	I0429 14:10:17.334548 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5"
	I0429 14:10:17.375266 1903322 logs.go:123] Gathering logs for kube-scheduler [dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93] ...
	I0429 14:10:17.375296 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93"
	I0429 14:10:17.412134 1903322 logs.go:123] Gathering logs for kube-controller-manager [99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034] ...
	I0429 14:10:17.412162 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034"
	I0429 14:10:17.480212 1903322 logs.go:123] Gathering logs for CRI-O ...
	I0429 14:10:17.480249 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 14:10:17.579098 1903322 logs.go:123] Gathering logs for container status ...
	I0429 14:10:17.579137 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 14:10:17.630146 1903322 logs.go:123] Gathering logs for describe nodes ...
	I0429 14:10:17.630179 1903322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 14:10:17.765303 1903322 out.go:304] Setting ErrFile to fd 2...
	I0429 14:10:17.765329 1903322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0429 14:10:17.765382 1903322 out.go:239] X Problems detected in kubelet:
	W0429 14:10:17.765391 1903322 out.go:239]   Apr 29 14:08:26 addons-457090 kubelet[1499]: W0429 14:08:26.020696    1499 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	W0429 14:10:17.765399 1903322 out.go:239]   Apr 29 14:08:26 addons-457090 kubelet[1499]: E0429 14:08:26.020738    1499 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-457090" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-457090' and this object
	I0429 14:10:17.765412 1903322 out.go:304] Setting ErrFile to fd 2...
	I0429 14:10:17.765419 1903322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:10:27.776417 1903322 system_pods.go:59] 18 kube-system pods found
	I0429 14:10:27.776453 1903322 system_pods.go:61] "coredns-7db6d8ff4d-8c59t" [6db81098-176e-4ea8-b78f-36bbcc52095f] Running
	I0429 14:10:27.776460 1903322 system_pods.go:61] "csi-hostpath-attacher-0" [62cdde81-fe62-4de1-817e-071809366cc1] Running
	I0429 14:10:27.776464 1903322 system_pods.go:61] "csi-hostpath-resizer-0" [107e3b64-e1dd-4011-b5b1-dfccb55c7ee4] Running
	I0429 14:10:27.776469 1903322 system_pods.go:61] "csi-hostpathplugin-pdrr9" [e6a7f56a-7b70-452a-980b-3db7b5e261c1] Running
	I0429 14:10:27.776473 1903322 system_pods.go:61] "etcd-addons-457090" [b193ac16-9a2e-4f2c-a710-09df74520cce] Running
	I0429 14:10:27.776477 1903322 system_pods.go:61] "kindnet-tvhsm" [4efdf177-7bfb-4e88-a045-4b64aad67f6a] Running
	I0429 14:10:27.776481 1903322 system_pods.go:61] "kube-apiserver-addons-457090" [5eb570ba-cbb2-4426-8cdd-7d80c357c572] Running
	I0429 14:10:27.776486 1903322 system_pods.go:61] "kube-controller-manager-addons-457090" [8a95a1b5-02f1-407d-8309-30984a5e118b] Running
	I0429 14:10:27.776526 1903322 system_pods.go:61] "kube-ingress-dns-minikube" [757aca19-0d56-4052-975e-6621832dc1b4] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0429 14:10:27.776545 1903322 system_pods.go:61] "kube-proxy-6wf6b" [d2a0a51b-b9e6-4e8c-b402-97a2fc9400ed] Running
	I0429 14:10:27.776567 1903322 system_pods.go:61] "kube-scheduler-addons-457090" [7949e7cb-686e-48b8-a52e-ce65e292de69] Running
	I0429 14:10:27.776586 1903322 system_pods.go:61] "metrics-server-c59844bb4-hltz2" [aedce136-b59d-41a1-83ba-037b4f9e9302] Running
	I0429 14:10:27.776614 1903322 system_pods.go:61] "nvidia-device-plugin-daemonset-b6fbn" [d72d7bb4-220a-44af-9b8f-8b406f53e814] Running
	I0429 14:10:27.776632 1903322 system_pods.go:61] "registry-proxy-96wq6" [1b8f503a-0540-4820-bd92-04b584ad56fb] Running
	I0429 14:10:27.776650 1903322 system_pods.go:61] "registry-zhb4n" [9abf552b-43fc-4cf4-968b-c3f3be943f93] Running
	I0429 14:10:27.776688 1903322 system_pods.go:61] "snapshot-controller-745499f584-q2bjz" [914ec43f-98bf-4718-9e42-59612fcf4a7b] Running
	I0429 14:10:27.776715 1903322 system_pods.go:61] "snapshot-controller-745499f584-qkwpt" [38e18f20-7e75-42ce-a983-3db45cab9efb] Running
	I0429 14:10:27.776734 1903322 system_pods.go:61] "storage-provisioner" [d4b9907a-2a43-4ebd-971b-85c4ac8c9969] Running
	I0429 14:10:27.776755 1903322 system_pods.go:74] duration metric: took 11.059136577s to wait for pod list to return data ...
	I0429 14:10:27.776776 1903322 default_sa.go:34] waiting for default service account to be created ...
	I0429 14:10:27.779153 1903322 default_sa.go:45] found service account: "default"
	I0429 14:10:27.779179 1903322 default_sa.go:55] duration metric: took 2.368305ms for default service account to be created ...
	I0429 14:10:27.779188 1903322 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 14:10:27.789203 1903322 system_pods.go:86] 18 kube-system pods found
	I0429 14:10:27.789238 1903322 system_pods.go:89] "coredns-7db6d8ff4d-8c59t" [6db81098-176e-4ea8-b78f-36bbcc52095f] Running
	I0429 14:10:27.789245 1903322 system_pods.go:89] "csi-hostpath-attacher-0" [62cdde81-fe62-4de1-817e-071809366cc1] Running
	I0429 14:10:27.789251 1903322 system_pods.go:89] "csi-hostpath-resizer-0" [107e3b64-e1dd-4011-b5b1-dfccb55c7ee4] Running
	I0429 14:10:27.789255 1903322 system_pods.go:89] "csi-hostpathplugin-pdrr9" [e6a7f56a-7b70-452a-980b-3db7b5e261c1] Running
	I0429 14:10:27.789259 1903322 system_pods.go:89] "etcd-addons-457090" [b193ac16-9a2e-4f2c-a710-09df74520cce] Running
	I0429 14:10:27.789265 1903322 system_pods.go:89] "kindnet-tvhsm" [4efdf177-7bfb-4e88-a045-4b64aad67f6a] Running
	I0429 14:10:27.789270 1903322 system_pods.go:89] "kube-apiserver-addons-457090" [5eb570ba-cbb2-4426-8cdd-7d80c357c572] Running
	I0429 14:10:27.789274 1903322 system_pods.go:89] "kube-controller-manager-addons-457090" [8a95a1b5-02f1-407d-8309-30984a5e118b] Running
	I0429 14:10:27.789318 1903322 system_pods.go:89] "kube-ingress-dns-minikube" [757aca19-0d56-4052-975e-6621832dc1b4] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0429 14:10:27.789330 1903322 system_pods.go:89] "kube-proxy-6wf6b" [d2a0a51b-b9e6-4e8c-b402-97a2fc9400ed] Running
	I0429 14:10:27.789336 1903322 system_pods.go:89] "kube-scheduler-addons-457090" [7949e7cb-686e-48b8-a52e-ce65e292de69] Running
	I0429 14:10:27.789344 1903322 system_pods.go:89] "metrics-server-c59844bb4-hltz2" [aedce136-b59d-41a1-83ba-037b4f9e9302] Running
	I0429 14:10:27.789359 1903322 system_pods.go:89] "nvidia-device-plugin-daemonset-b6fbn" [d72d7bb4-220a-44af-9b8f-8b406f53e814] Running
	I0429 14:10:27.789364 1903322 system_pods.go:89] "registry-proxy-96wq6" [1b8f503a-0540-4820-bd92-04b584ad56fb] Running
	I0429 14:10:27.789367 1903322 system_pods.go:89] "registry-zhb4n" [9abf552b-43fc-4cf4-968b-c3f3be943f93] Running
	I0429 14:10:27.789371 1903322 system_pods.go:89] "snapshot-controller-745499f584-q2bjz" [914ec43f-98bf-4718-9e42-59612fcf4a7b] Running
	I0429 14:10:27.789376 1903322 system_pods.go:89] "snapshot-controller-745499f584-qkwpt" [38e18f20-7e75-42ce-a983-3db45cab9efb] Running
	I0429 14:10:27.789478 1903322 system_pods.go:89] "storage-provisioner" [d4b9907a-2a43-4ebd-971b-85c4ac8c9969] Running
	I0429 14:10:27.789495 1903322 system_pods.go:126] duration metric: took 10.300369ms to wait for k8s-apps to be running ...
	I0429 14:10:27.789504 1903322 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 14:10:27.789579 1903322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 14:10:27.802063 1903322 system_svc.go:56] duration metric: took 12.54929ms WaitForService to wait for kubelet
	I0429 14:10:27.802107 1903322 kubeadm.go:576] duration metric: took 2m37.890665339s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 14:10:27.802127 1903322 node_conditions.go:102] verifying NodePressure condition ...
	I0429 14:10:27.805378 1903322 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0429 14:10:27.805412 1903322 node_conditions.go:123] node cpu capacity is 2
	I0429 14:10:27.805425 1903322 node_conditions.go:105] duration metric: took 3.291886ms to run NodePressure ...
	I0429 14:10:27.805438 1903322 start.go:240] waiting for startup goroutines ...
	I0429 14:10:27.805446 1903322 start.go:245] waiting for cluster config update ...
	I0429 14:10:27.805464 1903322 start.go:254] writing updated cluster config ...
	I0429 14:10:27.805764 1903322 ssh_runner.go:195] Run: rm -f paused
	I0429 14:10:28.135023 1903322 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 14:10:28.137397 1903322 out.go:177] * Done! kubectl is now configured to use "addons-457090" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 29 14:15:45 addons-457090 crio[925]: time="2024-04-29 14:15:45.227775287Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 29 14:15:45 addons-457090 crio[925]: time="2024-04-29 14:15:45.320396429Z" level=info msg="Created container 2b1dc8f9e8800738aec5ad012624a19ea225c7bb5fcb0d4f97e8bc365c5a8368: default/hello-world-app-86c47465fc-z6j5z/hello-world-app" id=53ac2f89-e54c-4c13-b795-a1898f08715c name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 14:15:45 addons-457090 crio[925]: time="2024-04-29 14:15:45.321792039Z" level=info msg="Starting container: 2b1dc8f9e8800738aec5ad012624a19ea225c7bb5fcb0d4f97e8bc365c5a8368" id=031afcd6-e099-43ef-bd05-2f4efe7b3677 name=/runtime.v1.RuntimeService/StartContainer
	Apr 29 14:15:45 addons-457090 crio[925]: time="2024-04-29 14:15:45.329128003Z" level=info msg="Started container" PID=8823 containerID=2b1dc8f9e8800738aec5ad012624a19ea225c7bb5fcb0d4f97e8bc365c5a8368 description=default/hello-world-app-86c47465fc-z6j5z/hello-world-app id=031afcd6-e099-43ef-bd05-2f4efe7b3677 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d30b14e260c4432f2e3b2fa6c2f3d7f91b15831cd41866e89957da7b1387d077
	Apr 29 14:15:45 addons-457090 conmon[8812]: conmon 2b1dc8f9e8800738aec5 <ninfo>: container 8823 exited with status 1
	Apr 29 14:15:45 addons-457090 crio[925]: time="2024-04-29 14:15:45.933494341Z" level=info msg="Removing container: 9ae16c688f0aa56c051a940f94920d89cf8d1087c63b5872a46f8fcc999e75e9" id=75174a6b-1a0f-4340-8443-93ecc789b6d9 name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 29 14:15:45 addons-457090 crio[925]: time="2024-04-29 14:15:45.956587313Z" level=info msg="Removed container 9ae16c688f0aa56c051a940f94920d89cf8d1087c63b5872a46f8fcc999e75e9: default/hello-world-app-86c47465fc-z6j5z/hello-world-app" id=75174a6b-1a0f-4340-8443-93ecc789b6d9 name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 29 14:17:18 addons-457090 crio[925]: time="2024-04-29 14:17:18.223359934Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=e7dc823b-4190-46bb-a289-96b7d2681363 name=/runtime.v1.ImageService/ImageStatus
	Apr 29 14:17:18 addons-457090 crio[925]: time="2024-04-29 14:17:18.223572781Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=e7dc823b-4190-46bb-a289-96b7d2681363 name=/runtime.v1.ImageService/ImageStatus
	Apr 29 14:17:18 addons-457090 crio[925]: time="2024-04-29 14:17:18.224441888Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=dd70793c-79a5-4492-87f8-970cda12c51f name=/runtime.v1.ImageService/ImageStatus
	Apr 29 14:17:18 addons-457090 crio[925]: time="2024-04-29 14:17:18.224597916Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=dd70793c-79a5-4492-87f8-970cda12c51f name=/runtime.v1.ImageService/ImageStatus
	Apr 29 14:17:18 addons-457090 crio[925]: time="2024-04-29 14:17:18.225373657Z" level=info msg="Creating container: default/hello-world-app-86c47465fc-z6j5z/hello-world-app" id=e1526b76-642d-49b8-b91c-8881a39f9641 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 14:17:18 addons-457090 crio[925]: time="2024-04-29 14:17:18.225468762Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 29 14:17:18 addons-457090 crio[925]: time="2024-04-29 14:17:18.290273063Z" level=info msg="Created container 95e536a505a8357839aa488703a4083597043b9628ba26b05ba846f6d2989916: default/hello-world-app-86c47465fc-z6j5z/hello-world-app" id=e1526b76-642d-49b8-b91c-8881a39f9641 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 14:17:18 addons-457090 crio[925]: time="2024-04-29 14:17:18.291136722Z" level=info msg="Starting container: 95e536a505a8357839aa488703a4083597043b9628ba26b05ba846f6d2989916" id=766448f2-dc1d-4fd1-9f84-254dcd60fce7 name=/runtime.v1.RuntimeService/StartContainer
	Apr 29 14:17:18 addons-457090 crio[925]: time="2024-04-29 14:17:18.297072316Z" level=info msg="Started container" PID=8885 containerID=95e536a505a8357839aa488703a4083597043b9628ba26b05ba846f6d2989916 description=default/hello-world-app-86c47465fc-z6j5z/hello-world-app id=766448f2-dc1d-4fd1-9f84-254dcd60fce7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d30b14e260c4432f2e3b2fa6c2f3d7f91b15831cd41866e89957da7b1387d077
	Apr 29 14:17:18 addons-457090 conmon[8874]: conmon 95e536a505a8357839aa <ninfo>: container 8885 exited with status 1
	Apr 29 14:17:19 addons-457090 crio[925]: time="2024-04-29 14:17:19.114101221Z" level=info msg="Removing container: 2b1dc8f9e8800738aec5ad012624a19ea225c7bb5fcb0d4f97e8bc365c5a8368" id=27a7174e-b429-4a61-940c-7c2a667710c7 name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 29 14:17:19 addons-457090 crio[925]: time="2024-04-29 14:17:19.143012209Z" level=info msg="Removed container 2b1dc8f9e8800738aec5ad012624a19ea225c7bb5fcb0d4f97e8bc365c5a8368: default/hello-world-app-86c47465fc-z6j5z/hello-world-app" id=27a7174e-b429-4a61-940c-7c2a667710c7 name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 29 14:17:23 addons-457090 crio[925]: time="2024-04-29 14:17:23.008494699Z" level=info msg="Stopping container: 2c1b29602737a8cbaaa76b5741956d16dce65c17fdf275f5a1e0351dac78d2fc (timeout: 30s)" id=82c5c4b5-2842-464b-9800-7dbb66e70784 name=/runtime.v1.RuntimeService/StopContainer
	Apr 29 14:17:24 addons-457090 crio[925]: time="2024-04-29 14:17:24.178723436Z" level=info msg="Stopped container 2c1b29602737a8cbaaa76b5741956d16dce65c17fdf275f5a1e0351dac78d2fc: kube-system/metrics-server-c59844bb4-hltz2/metrics-server" id=82c5c4b5-2842-464b-9800-7dbb66e70784 name=/runtime.v1.RuntimeService/StopContainer
	Apr 29 14:17:24 addons-457090 crio[925]: time="2024-04-29 14:17:24.179296446Z" level=info msg="Stopping pod sandbox: 654efe9363e2c445e0db0295f161b34c8760f997b65abca6c3075367b292239a" id=876c37a9-7eaf-4758-b592-353920cfae56 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 29 14:17:24 addons-457090 crio[925]: time="2024-04-29 14:17:24.179548802Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-hltz2 Namespace:kube-system ID:654efe9363e2c445e0db0295f161b34c8760f997b65abca6c3075367b292239a UID:aedce136-b59d-41a1-83ba-037b4f9e9302 NetNS:/var/run/netns/2260a325-fe6e-4790-98f0-8833e525cf13 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 29 14:17:24 addons-457090 crio[925]: time="2024-04-29 14:17:24.179684268Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-hltz2 from CNI network \"kindnet\" (type=ptp)"
	Apr 29 14:17:24 addons-457090 crio[925]: time="2024-04-29 14:17:24.217617831Z" level=info msg="Stopped pod sandbox: 654efe9363e2c445e0db0295f161b34c8760f997b65abca6c3075367b292239a" id=876c37a9-7eaf-4758-b592-353920cfae56 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	95e536a505a83       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                        6 seconds ago       Exited              hello-world-app           5                   d30b14e260c44       hello-world-app-86c47465fc-z6j5z
	6fa1fc1423c4b       docker.io/library/nginx@sha256:1f37baf7373d386ee9de0437325ae3e0202a3959803fd79144fa0bb27e2b2801                         5 minutes ago       Running             nginx                     0                   ceb339f77d791       nginx
	c82ac5fdba3be       ghcr.io/headlamp-k8s/headlamp@sha256:1f277f42730106526a27560517a4c5f9253ccb2477be458986f44a791158a02c                   5 minutes ago       Running             headlamp                  0                   e2f2663794a20       headlamp-7559bf459f-2zx6r
	01984734ff3ae       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            8 minutes ago       Running             gcp-auth                  0                   c2b5b232a09e7       gcp-auth-5db96cd9b4-kc2nb
	3e92c4f46c7f9       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         8 minutes ago       Running             yakd                      0                   ecd6a09b31bd4       yakd-dashboard-5ddbf7d777-w8n26
	2c1b29602737a       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   8 minutes ago       Exited              metrics-server            0                   654efe9363e2c       metrics-server-c59844bb4-hltz2
	a81321961a1d8       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        8 minutes ago       Running             coredns                   0                   d8ae2e591bdd7       coredns-7db6d8ff4d-8c59t
	19a13a7429ba0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        8 minutes ago       Running             storage-provisioner       0                   ec5f74e8a7ef7       storage-provisioner
	99b8b7a1eee2f       cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f                                                        9 minutes ago       Running             kube-proxy                0                   040bbb1a390c0       kube-proxy-6wf6b
	0229dc76b7d0d       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d                                                        9 minutes ago       Running             kindnet-cni               0                   bf59d2ef2bcd4       kindnet-tvhsm
	dab35c23ea406       547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a                                                        9 minutes ago       Running             kube-scheduler            0                   444913ef40c4f       kube-scheduler-addons-457090
	99e1db8ae8156       68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1                                                        9 minutes ago       Running             kube-controller-manager   0                   a205372cd9d97       kube-controller-manager-addons-457090
	8d4c3f49a1645       181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb                                                        9 minutes ago       Running             kube-apiserver            0                   49dd90a3c25e0       kube-apiserver-addons-457090
	3401a97b7bbcb       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                        9 minutes ago       Running             etcd                      0                   cc6531ddd2c59       etcd-addons-457090
	
	
	==> coredns [a81321961a1d88d779f4a45057ad12a09a64edf2cb7fc52d6845b95f8d37d9f5] <==
	[INFO] 10.244.0.20:36330 - 52850 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064024s
	[INFO] 10.244.0.20:36330 - 48400 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000075552s
	[INFO] 10.244.0.20:36330 - 45816 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000172972s
	[INFO] 10.244.0.20:36330 - 44843 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000156299s
	[INFO] 10.244.0.20:36330 - 7732 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002661908s
	[INFO] 10.244.0.20:36330 - 1424 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001547741s
	[INFO] 10.244.0.20:36330 - 20865 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075068s
	[INFO] 10.244.0.20:54365 - 22631 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000315551s
	[INFO] 10.244.0.20:57667 - 61744 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000076529s
	[INFO] 10.244.0.20:57667 - 24351 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000084709s
	[INFO] 10.244.0.20:57667 - 52193 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00007076s
	[INFO] 10.244.0.20:54365 - 33470 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041485s
	[INFO] 10.244.0.20:54365 - 31504 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000239416s
	[INFO] 10.244.0.20:57667 - 10310 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037867s
	[INFO] 10.244.0.20:54365 - 17753 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000035569s
	[INFO] 10.244.0.20:54365 - 5323 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050691s
	[INFO] 10.244.0.20:54365 - 26961 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043545s
	[INFO] 10.244.0.20:57667 - 943 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044045s
	[INFO] 10.244.0.20:57667 - 22171 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045226s
	[INFO] 10.244.0.20:54365 - 19060 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002191552s
	[INFO] 10.244.0.20:57667 - 50546 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002016694s
	[INFO] 10.244.0.20:54365 - 59551 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001323922s
	[INFO] 10.244.0.20:57667 - 43293 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001405137s
	[INFO] 10.244.0.20:57667 - 8542 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049936s
	[INFO] 10.244.0.20:54365 - 3578 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000167991s
	
	
	==> describe nodes <==
	Name:               addons-457090
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-457090
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=94a01df0d4b48636d4af3d06a53be687e06c0844
	                    minikube.k8s.io/name=addons-457090
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T14_07_38_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-457090
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 14:07:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-457090
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 14:17:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 14:14:47 +0000   Mon, 29 Apr 2024 14:07:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 14:14:47 +0000   Mon, 29 Apr 2024 14:07:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 14:14:47 +0000   Mon, 29 Apr 2024 14:07:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 14:14:47 +0000   Mon, 29 Apr 2024 14:08:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-457090
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 a89a0467952a46398c09ced7a4180db6
	  System UUID:                e60b6db6-cc0a-43d1-8947-017d88d6eca3
	  Boot ID:                    b8f2360a-0b19-4e04-aa8c-604719eae8f1
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-z6j5z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  gcp-auth                    gcp-auth-5db96cd9b4-kc2nb                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m23s
	  headlamp                    headlamp-7559bf459f-2zx6r                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 coredns-7db6d8ff4d-8c59t                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m32s
	  kube-system                 etcd-addons-457090                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m47s
	  kube-system                 kindnet-tvhsm                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m32s
	  kube-system                 kube-apiserver-addons-457090             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 kube-controller-manager-addons-457090    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m49s
	  kube-system                 kube-proxy-6wf6b                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 kube-scheduler-addons-457090             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m30s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-w8n26          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     9m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m28s                  kube-proxy       
	  Normal  Starting                 9m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m55s (x8 over 9m55s)  kubelet          Node addons-457090 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m55s (x8 over 9m55s)  kubelet          Node addons-457090 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m55s (x8 over 9m55s)  kubelet          Node addons-457090 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m47s                  kubelet          Node addons-457090 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m47s                  kubelet          Node addons-457090 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m47s                  kubelet          Node addons-457090 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m35s                  node-controller  Node addons-457090 event: Registered Node addons-457090 in Controller
	  Normal  NodeReady                8m59s                  kubelet          Node addons-457090 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001061] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=00000000e2172674
	[  +0.001121] FS-Cache: O-key=[8] 'd5425c0100000000'
	[  +0.000718] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001013] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000c61f21fc
	[  +0.001056] FS-Cache: N-key=[8] 'd5425c0100000000'
	[  +2.201464] FS-Cache: Duplicate cookie detected
	[  +0.000788] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001073] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=00000000160cdc65
	[  +0.001055] FS-Cache: O-key=[8] 'd4425c0100000000'
	[  +0.000702] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000925] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000ec0b8a4d
	[  +0.001126] FS-Cache: N-key=[8] 'd4425c0100000000'
	[  +0.396125] FS-Cache: Duplicate cookie detected
	[  +0.000758] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.000978] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=000000003014340c
	[  +0.001111] FS-Cache: O-key=[8] 'da425c0100000000'
	[  +0.000783] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000053a5fe1
	[  +0.001072] FS-Cache: N-key=[8] 'da425c0100000000'
	[Apr29 13:39] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[ +48.347025] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.006466] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.002188] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.173561] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	
	
	==> etcd [3401a97b7bbcbaa462a4892d9c41177640f685854d2d85fa521ba7f2b5e6836c] <==
	{"level":"info","ts":"2024-04-29T14:07:31.25515Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T14:07:31.266252Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-04-29T14:07:31.276718Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T14:07:31.276818Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T14:07:31.276886Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T14:07:31.276972Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T14:07:31.277025Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-04-29T14:07:53.088986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.638032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T14:07:53.096321Z","caller":"traceutil/trace.go:171","msg":"trace[184047524] range","detail":"{range_begin:/registry/controllers/kube-system/registry; range_end:; response_count:0; response_revision:375; }","duration":"146.972854ms","start":"2024-04-29T14:07:52.949324Z","end":"2024-04-29T14:07:53.096297Z","steps":["trace[184047524] 'agreement among raft nodes before linearized reading'  (duration: 85.702981ms)","trace[184047524] 'range keys from in-memory index tree'  (duration: 53.922447ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T14:07:53.097398Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.830252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T14:07:53.101108Z","caller":"traceutil/trace.go:171","msg":"trace[104445180] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:377; }","duration":"151.543994ms","start":"2024-04-29T14:07:52.949549Z","end":"2024-04-29T14:07:53.101093Z","steps":["trace[104445180] 'agreement among raft nodes before linearized reading'  (duration: 147.764685ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:07:53.101157Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.628309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T14:07:53.123323Z","caller":"traceutil/trace.go:171","msg":"trace[133450843] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:377; }","duration":"173.793689ms","start":"2024-04-29T14:07:52.949414Z","end":"2024-04-29T14:07:53.123208Z","steps":["trace[133450843] 'agreement among raft nodes before linearized reading'  (duration: 151.591386ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T14:07:53.602269Z","caller":"traceutil/trace.go:171","msg":"trace[1328106099] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"151.363983ms","start":"2024-04-29T14:07:53.450886Z","end":"2024-04-29T14:07:53.60225Z","steps":["trace[1328106099] 'process raft request'  (duration: 92.474739ms)","trace[1328106099] 'compare'  (duration: 57.453774ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T14:07:53.602495Z","caller":"traceutil/trace.go:171","msg":"trace[410746466] linearizableReadLoop","detail":"{readStateIndex:411; appliedIndex:410; }","duration":"151.31307ms","start":"2024-04-29T14:07:53.451173Z","end":"2024-04-29T14:07:53.602486Z","steps":["trace[410746466] 'read index received'  (duration: 91.752142ms)","trace[410746466] 'applied index is now lower than readState.Index'  (duration: 59.560001ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T14:07:53.60262Z","caller":"traceutil/trace.go:171","msg":"trace[674151775] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"125.974623ms","start":"2024-04-29T14:07:53.476639Z","end":"2024-04-29T14:07:53.602613Z","steps":["trace[674151775] 'process raft request'  (duration: 124.690052ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T14:07:53.602782Z","caller":"traceutil/trace.go:171","msg":"trace[789335460] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"109.297005ms","start":"2024-04-29T14:07:53.493479Z","end":"2024-04-29T14:07:53.602776Z","steps":["trace[789335460] 'process raft request'  (duration: 107.903101ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:07:53.603095Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.906478ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/\" range_end:\"/registry/serviceaccounts/kube-system0\" ","response":"range_response_count:40 size:9600"}
	{"level":"warn","ts":"2024-04-29T14:07:53.623635Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.504039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T14:07:53.628829Z","caller":"traceutil/trace.go:171","msg":"trace[1394273420] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:403; }","duration":"135.702096ms","start":"2024-04-29T14:07:53.493107Z","end":"2024-04-29T14:07:53.628809Z","steps":["trace[1394273420] 'agreement among raft nodes before linearized reading'  (duration: 130.462702ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:07:53.629562Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.865935ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/yakd-dashboard/\" range_end:\"/registry/resourcequotas/yakd-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T14:07:53.629678Z","caller":"traceutil/trace.go:171","msg":"trace[1752055843] range","detail":"{range_begin:/registry/resourcequotas/yakd-dashboard/; range_end:/registry/resourcequotas/yakd-dashboard0; response_count:0; response_revision:403; }","duration":"135.987264ms","start":"2024-04-29T14:07:53.493681Z","end":"2024-04-29T14:07:53.629668Z","steps":["trace[1752055843] 'agreement among raft nodes before linearized reading'  (duration: 135.85316ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:07:53.629868Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.642571ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/local-path-storage/\" range_end:\"/registry/resourcequotas/local-path-storage0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T14:07:53.629953Z","caller":"traceutil/trace.go:171","msg":"trace[1039367830] range","detail":"{range_begin:/registry/resourcequotas/local-path-storage/; range_end:/registry/resourcequotas/local-path-storage0; response_count:0; response_revision:403; }","duration":"136.728527ms","start":"2024-04-29T14:07:53.493216Z","end":"2024-04-29T14:07:53.629945Z","steps":["trace[1039367830] 'agreement among raft nodes before linearized reading'  (duration: 136.629656ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T14:07:53.631635Z","caller":"traceutil/trace.go:171","msg":"trace[261754142] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/; range_end:/registry/serviceaccounts/kube-system0; response_count:40; response_revision:403; }","duration":"155.802315ms","start":"2024-04-29T14:07:53.451168Z","end":"2024-04-29T14:07:53.60697Z","steps":["trace[261754142] 'agreement among raft nodes before linearized reading'  (duration: 151.743722ms)"],"step_count":1}
	
	
	==> gcp-auth [01984734ff3aed05b66196108f486b91a04021d4f9ebe8252f25c35963b06009] <==
	2024/04/29 14:09:13 GCP Auth Webhook started!
	2024/04/29 14:10:40 Ready to marshal response ...
	2024/04/29 14:10:40 Ready to write response ...
	2024/04/29 14:10:41 Ready to marshal response ...
	2024/04/29 14:10:41 Ready to write response ...
	2024/04/29 14:10:57 Ready to marshal response ...
	2024/04/29 14:10:57 Ready to write response ...
	2024/04/29 14:10:57 Ready to marshal response ...
	2024/04/29 14:10:57 Ready to write response ...
	2024/04/29 14:11:03 Ready to marshal response ...
	2024/04/29 14:11:03 Ready to write response ...
	2024/04/29 14:11:07 Ready to marshal response ...
	2024/04/29 14:11:07 Ready to write response ...
	2024/04/29 14:11:27 Ready to marshal response ...
	2024/04/29 14:11:27 Ready to write response ...
	2024/04/29 14:11:27 Ready to marshal response ...
	2024/04/29 14:11:27 Ready to write response ...
	2024/04/29 14:11:27 Ready to marshal response ...
	2024/04/29 14:11:27 Ready to write response ...
	2024/04/29 14:12:02 Ready to marshal response ...
	2024/04/29 14:12:02 Ready to write response ...
	2024/04/29 14:14:22 Ready to marshal response ...
	2024/04/29 14:14:22 Ready to write response ...
	
	
	==> kernel <==
	 14:17:24 up  9:59,  0 users,  load average: 0.20, 0.86, 1.98
	Linux addons-457090 5.15.0-1058-aws #64~20.04.1-Ubuntu SMP Tue Apr 9 11:11:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [0229dc76b7d0d91d8939d64ff55b5365deb30b5f7146a4117e113b579a65c3cc] <==
	I0429 14:15:15.991494       1 main.go:227] handling current node
	I0429 14:15:26.006759       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:15:26.006791       1 main.go:227] handling current node
	I0429 14:15:36.011306       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:15:36.011335       1 main.go:227] handling current node
	I0429 14:15:46.020613       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:15:46.020644       1 main.go:227] handling current node
	I0429 14:15:56.025605       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:15:56.025632       1 main.go:227] handling current node
	I0429 14:16:06.031429       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:16:06.031462       1 main.go:227] handling current node
	I0429 14:16:16.036004       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:16:16.036033       1 main.go:227] handling current node
	I0429 14:16:26.047586       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:16:26.047613       1 main.go:227] handling current node
	I0429 14:16:36.060171       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:16:36.060199       1 main.go:227] handling current node
	I0429 14:16:46.072401       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:16:46.072507       1 main.go:227] handling current node
	I0429 14:16:56.076724       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:16:56.076830       1 main.go:227] handling current node
	I0429 14:17:06.089095       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:17:06.089218       1 main.go:227] handling current node
	I0429 14:17:16.101111       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:17:16.101137       1 main.go:227] handling current node
	
	
	==> kube-apiserver [8d4c3f49a1645155cde7392cd6c877625f43a88acd5b67e3307e8c2bc40a0f93] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 14:09:54.312782       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0429 14:10:54.774077       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0429 14:11:08.435800       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0429 14:11:08.448826       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0429 14:11:08.459091       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0429 14:11:14.366255       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I0429 14:11:20.921469       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 14:11:20.921604       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 14:11:21.015696       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 14:11:21.015847       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 14:11:21.040179       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 14:11:21.040301       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 14:11:21.077320       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 14:11:21.077448       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0429 14:11:22.040555       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0429 14:11:22.078088       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0429 14:11:22.091693       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0429 14:11:23.461019       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0429 14:11:27.643482       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.18.186"}
	I0429 14:11:56.288499       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0429 14:11:57.342692       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0429 14:12:01.831412       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0429 14:12:02.140477       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.142.127"}
	I0429 14:14:22.692277       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.152.50"}
	
	
	==> kube-controller-manager [99e1db8ae8156b28336afe942b842d512ac4d213aee0a2dfa2bc6698055b8034] <==
	E0429 14:15:36.019745       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 14:15:36.260896       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 14:15:36.260933       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0429 14:15:45.959199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="171.331µs"
	W0429 14:15:47.874141       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 14:15:47.874179       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 14:15:48.614014       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 14:15:48.614054       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0429 14:16:01.244982       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="170.222µs"
	W0429 14:16:10.611129       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 14:16:10.611166       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 14:16:31.155476       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 14:16:31.155517       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 14:16:32.807684       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 14:16:32.807724       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 14:16:46.425113       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 14:16:46.425156       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 14:16:46.733601       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 14:16:46.733640       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 14:17:12.392572       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 14:17:12.392607       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 14:17:18.735211       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 14:17:18.735253       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0429 14:17:19.124199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="69.825µs"
	I0429 14:17:22.987083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="6.178µs"
	
	
	==> kube-proxy [99b8b7a1eee2f1c79ba89af144f13e95d10046013152427c76c4044da853ec67] <==
	I0429 14:07:55.453455       1 server_linux.go:69] "Using iptables proxy"
	I0429 14:07:55.577403       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0429 14:07:55.685651       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0429 14:07:55.685873       1 server_linux.go:165] "Using iptables Proxier"
	I0429 14:07:55.689174       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0429 14:07:55.689205       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0429 14:07:55.689227       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 14:07:55.689419       1 server.go:872] "Version info" version="v1.30.0"
	I0429 14:07:55.689441       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 14:07:55.690642       1 config.go:192] "Starting service config controller"
	I0429 14:07:55.690662       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 14:07:55.690689       1 config.go:101] "Starting endpoint slice config controller"
	I0429 14:07:55.690701       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 14:07:55.691147       1 config.go:319] "Starting node config controller"
	I0429 14:07:55.691164       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 14:07:55.793700       1 shared_informer.go:320] Caches are synced for service config
	I0429 14:07:55.793769       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 14:07:55.792340       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dab35c23ea4065b0a51128ef70a1fda74a2447b21c8a13195c5b14133144dc93] <==
	I0429 14:07:33.280339       1 serving.go:380] Generated self-signed cert in-memory
	W0429 14:07:35.867108       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 14:07:35.867151       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 14:07:35.867161       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 14:07:35.867168       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 14:07:35.902939       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 14:07:35.908733       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 14:07:35.913009       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 14:07:35.913536       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 14:07:35.920729       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 14:07:35.913556       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0429 14:07:35.925058       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 14:07:35.925167       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0429 14:07:37.021927       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 14:15:45 addons-457090 kubelet[1499]: I0429 14:15:45.931280    1499 scope.go:117] "RemoveContainer" containerID="9ae16c688f0aa56c051a940f94920d89cf8d1087c63b5872a46f8fcc999e75e9"
	Apr 29 14:15:45 addons-457090 kubelet[1499]: I0429 14:15:45.931582    1499 scope.go:117] "RemoveContainer" containerID="2b1dc8f9e8800738aec5ad012624a19ea225c7bb5fcb0d4f97e8bc365c5a8368"
	Apr 29 14:15:45 addons-457090 kubelet[1499]: E0429 14:15:45.931850    1499 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-z6j5z_default(7e0954b1-1197-4cd7-85c3-d989d0d8799d)\"" pod="default/hello-world-app-86c47465fc-z6j5z" podUID="7e0954b1-1197-4cd7-85c3-d989d0d8799d"
	Apr 29 14:16:01 addons-457090 kubelet[1499]: I0429 14:16:01.222594    1499 scope.go:117] "RemoveContainer" containerID="2b1dc8f9e8800738aec5ad012624a19ea225c7bb5fcb0d4f97e8bc365c5a8368"
	Apr 29 14:16:01 addons-457090 kubelet[1499]: E0429 14:16:01.222881    1499 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-z6j5z_default(7e0954b1-1197-4cd7-85c3-d989d0d8799d)\"" pod="default/hello-world-app-86c47465fc-z6j5z" podUID="7e0954b1-1197-4cd7-85c3-d989d0d8799d"
	Apr 29 14:16:14 addons-457090 kubelet[1499]: I0429 14:16:14.223055    1499 scope.go:117] "RemoveContainer" containerID="2b1dc8f9e8800738aec5ad012624a19ea225c7bb5fcb0d4f97e8bc365c5a8368"
	Apr 29 14:16:14 addons-457090 kubelet[1499]: E0429 14:16:14.223348    1499 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-z6j5z_default(7e0954b1-1197-4cd7-85c3-d989d0d8799d)\"" pod="default/hello-world-app-86c47465fc-z6j5z" podUID="7e0954b1-1197-4cd7-85c3-d989d0d8799d"
	Apr 29 14:16:26 addons-457090 kubelet[1499]: I0429 14:16:26.222243    1499 scope.go:117] "RemoveContainer" containerID="2b1dc8f9e8800738aec5ad012624a19ea225c7bb5fcb0d4f97e8bc365c5a8368"
	Apr 29 14:16:26 addons-457090 kubelet[1499]: E0429 14:16:26.222539    1499 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-z6j5z_default(7e0954b1-1197-4cd7-85c3-d989d0d8799d)\"" pod="default/hello-world-app-86c47465fc-z6j5z" podUID="7e0954b1-1197-4cd7-85c3-d989d0d8799d"
	Apr 29 14:16:40 addons-457090 kubelet[1499]: I0429 14:16:40.222563    1499 scope.go:117] "RemoveContainer" containerID="2b1dc8f9e8800738aec5ad012624a19ea225c7bb5fcb0d4f97e8bc365c5a8368"
	Apr 29 14:16:40 addons-457090 kubelet[1499]: E0429 14:16:40.222881    1499 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-z6j5z_default(7e0954b1-1197-4cd7-85c3-d989d0d8799d)\"" pod="default/hello-world-app-86c47465fc-z6j5z" podUID="7e0954b1-1197-4cd7-85c3-d989d0d8799d"
	Apr 29 14:16:52 addons-457090 kubelet[1499]: I0429 14:16:52.222710    1499 scope.go:117] "RemoveContainer" containerID="2b1dc8f9e8800738aec5ad012624a19ea225c7bb5fcb0d4f97e8bc365c5a8368"
	Apr 29 14:16:52 addons-457090 kubelet[1499]: E0429 14:16:52.223015    1499 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-z6j5z_default(7e0954b1-1197-4cd7-85c3-d989d0d8799d)\"" pod="default/hello-world-app-86c47465fc-z6j5z" podUID="7e0954b1-1197-4cd7-85c3-d989d0d8799d"
	Apr 29 14:17:03 addons-457090 kubelet[1499]: I0429 14:17:03.222807    1499 scope.go:117] "RemoveContainer" containerID="2b1dc8f9e8800738aec5ad012624a19ea225c7bb5fcb0d4f97e8bc365c5a8368"
	Apr 29 14:17:03 addons-457090 kubelet[1499]: E0429 14:17:03.223343    1499 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-z6j5z_default(7e0954b1-1197-4cd7-85c3-d989d0d8799d)\"" pod="default/hello-world-app-86c47465fc-z6j5z" podUID="7e0954b1-1197-4cd7-85c3-d989d0d8799d"
	Apr 29 14:17:18 addons-457090 kubelet[1499]: I0429 14:17:18.222780    1499 scope.go:117] "RemoveContainer" containerID="2b1dc8f9e8800738aec5ad012624a19ea225c7bb5fcb0d4f97e8bc365c5a8368"
	Apr 29 14:17:19 addons-457090 kubelet[1499]: I0429 14:17:19.112374    1499 scope.go:117] "RemoveContainer" containerID="2b1dc8f9e8800738aec5ad012624a19ea225c7bb5fcb0d4f97e8bc365c5a8368"
	Apr 29 14:17:19 addons-457090 kubelet[1499]: I0429 14:17:19.112717    1499 scope.go:117] "RemoveContainer" containerID="95e536a505a8357839aa488703a4083597043b9628ba26b05ba846f6d2989916"
	Apr 29 14:17:19 addons-457090 kubelet[1499]: E0429 14:17:19.112985    1499 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-z6j5z_default(7e0954b1-1197-4cd7-85c3-d989d0d8799d)\"" pod="default/hello-world-app-86c47465fc-z6j5z" podUID="7e0954b1-1197-4cd7-85c3-d989d0d8799d"
	Apr 29 14:17:24 addons-457090 kubelet[1499]: I0429 14:17:24.390800    1499 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/aedce136-b59d-41a1-83ba-037b4f9e9302-tmp-dir\") pod \"aedce136-b59d-41a1-83ba-037b4f9e9302\" (UID: \"aedce136-b59d-41a1-83ba-037b4f9e9302\") "
	Apr 29 14:17:24 addons-457090 kubelet[1499]: I0429 14:17:24.390862    1499 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-687hs\" (UniqueName: \"kubernetes.io/projected/aedce136-b59d-41a1-83ba-037b4f9e9302-kube-api-access-687hs\") pod \"aedce136-b59d-41a1-83ba-037b4f9e9302\" (UID: \"aedce136-b59d-41a1-83ba-037b4f9e9302\") "
	Apr 29 14:17:24 addons-457090 kubelet[1499]: I0429 14:17:24.391563    1499 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aedce136-b59d-41a1-83ba-037b4f9e9302-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "aedce136-b59d-41a1-83ba-037b4f9e9302" (UID: "aedce136-b59d-41a1-83ba-037b4f9e9302"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Apr 29 14:17:24 addons-457090 kubelet[1499]: I0429 14:17:24.398731    1499 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aedce136-b59d-41a1-83ba-037b4f9e9302-kube-api-access-687hs" (OuterVolumeSpecName: "kube-api-access-687hs") pod "aedce136-b59d-41a1-83ba-037b4f9e9302" (UID: "aedce136-b59d-41a1-83ba-037b4f9e9302"). InnerVolumeSpecName "kube-api-access-687hs". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 29 14:17:24 addons-457090 kubelet[1499]: I0429 14:17:24.491108    1499 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/aedce136-b59d-41a1-83ba-037b4f9e9302-tmp-dir\") on node \"addons-457090\" DevicePath \"\""
	Apr 29 14:17:24 addons-457090 kubelet[1499]: I0429 14:17:24.491146    1499 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-687hs\" (UniqueName: \"kubernetes.io/projected/aedce136-b59d-41a1-83ba-037b4f9e9302-kube-api-access-687hs\") on node \"addons-457090\" DevicePath \"\""
	
	
	==> storage-provisioner [19a13a7429ba0c21152b3811e3da57e53205b758fcacfeaccaab942065bd5b8b] <==
	I0429 14:08:27.081886       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 14:08:27.097405       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 14:08:27.097609       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 14:08:27.110477       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 14:08:27.111553       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-457090_54ebab16-af77-4994-b425-0fe6282ae3f2!
	I0429 14:08:27.111766       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9d103883-fb0a-4686-8739-74a01b7285ce", APIVersion:"v1", ResourceVersion:"946", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-457090_54ebab16-af77-4994-b425-0fe6282ae3f2 became leader
	I0429 14:08:27.211716       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-457090_54ebab16-af77-4994-b425-0fe6282ae3f2!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-457090 -n addons-457090
helpers_test.go:261: (dbg) Run:  kubectl --context addons-457090 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (347.86s)

x
+
TestMultiControlPlane/serial/RestartCluster (127.45s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-581657 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0429 14:31:37.739750 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-581657 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m2.894292234s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:589: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-581657       NotReady   control-plane   10m     v1.30.0
	ha-581657-m02   Ready      control-plane   9m57s   v1.30.0
	ha-581657-m04   Ready      <none>          7m57s   v1.30.0

-- /stdout --
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

-- /stdout --
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-581657
helpers_test.go:235: (dbg) docker inspect ha-581657:

-- stdout --
	[
	    {
	        "Id": "c01b3fbb28813eca464cc45cefddcc7f5af1da2db2412a06939d424a5b6a6b34",
	        "Created": "2024-04-29T14:22:23.781421825Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1960789,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-29T14:31:12.70527113Z",
	            "FinishedAt": "2024-04-29T14:31:11.77127705Z"
	        },
	        "Image": "sha256:c9315e0f61546d7b9630cf89252fa7f614fc966830e816cca5333df5c944376f",
	        "ResolvConfPath": "/var/lib/docker/containers/c01b3fbb28813eca464cc45cefddcc7f5af1da2db2412a06939d424a5b6a6b34/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c01b3fbb28813eca464cc45cefddcc7f5af1da2db2412a06939d424a5b6a6b34/hostname",
	        "HostsPath": "/var/lib/docker/containers/c01b3fbb28813eca464cc45cefddcc7f5af1da2db2412a06939d424a5b6a6b34/hosts",
	        "LogPath": "/var/lib/docker/containers/c01b3fbb28813eca464cc45cefddcc7f5af1da2db2412a06939d424a5b6a6b34/c01b3fbb28813eca464cc45cefddcc7f5af1da2db2412a06939d424a5b6a6b34-json.log",
	        "Name": "/ha-581657",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-581657:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-581657",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6370131cde2fd5f7c679c670360677c772836f1831aed151770e5c874ad3ecc0-init/diff:/var/lib/docker/overlay2/f080d6ed1efba2dbfce916f4260b407bf4d9204079d2708eb1c14f6847e489ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6370131cde2fd5f7c679c670360677c772836f1831aed151770e5c874ad3ecc0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6370131cde2fd5f7c679c670360677c772836f1831aed151770e5c874ad3ecc0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6370131cde2fd5f7c679c670360677c772836f1831aed151770e5c874ad3ecc0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-581657",
	                "Source": "/var/lib/docker/volumes/ha-581657/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-581657",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-581657",
	                "name.minikube.sigs.k8s.io": "ha-581657",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8af222938c53ca8d095415400dcf1c0ff237627aeaca497eb46cd9fc43f07799",
	            "SandboxKey": "/var/run/docker/netns/8af222938c53",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35102"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35101"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35098"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35099"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-581657": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "ee8cb9302668931a8fc58d2a1945f7d2ccaac8c383701bfdd763d2eecd9aabbc",
	                    "EndpointID": "5c90ddccbe3952a9450d3c1057c4a73301aa2ff4ca63942397c27e46e6fef6dd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-581657",
	                        "c01b3fbb2881"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-581657 -n ha-581657
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-581657 logs -n 25: (2.023624178s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-581657 cp ha-581657-m03:/home/docker/cp-test.txt                              | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | ha-581657-m04:/home/docker/cp-test_ha-581657-m03_ha-581657-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-581657 ssh -n                                                                 | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | ha-581657-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-581657 ssh -n ha-581657-m04 sudo cat                                          | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | /home/docker/cp-test_ha-581657-m03_ha-581657-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-581657 cp testdata/cp-test.txt                                                | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | ha-581657-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-581657 ssh -n                                                                 | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | ha-581657-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-581657 cp ha-581657-m04:/home/docker/cp-test.txt                              | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3176475453/001/cp-test_ha-581657-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-581657 ssh -n                                                                 | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | ha-581657-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-581657 cp ha-581657-m04:/home/docker/cp-test.txt                              | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | ha-581657:/home/docker/cp-test_ha-581657-m04_ha-581657.txt                       |           |         |         |                     |                     |
	| ssh     | ha-581657 ssh -n                                                                 | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | ha-581657-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-581657 ssh -n ha-581657 sudo cat                                              | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | /home/docker/cp-test_ha-581657-m04_ha-581657.txt                                 |           |         |         |                     |                     |
	| cp      | ha-581657 cp ha-581657-m04:/home/docker/cp-test.txt                              | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | ha-581657-m02:/home/docker/cp-test_ha-581657-m04_ha-581657-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-581657 ssh -n                                                                 | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | ha-581657-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-581657 ssh -n ha-581657-m02 sudo cat                                          | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | /home/docker/cp-test_ha-581657-m04_ha-581657-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-581657 cp ha-581657-m04:/home/docker/cp-test.txt                              | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | ha-581657-m03:/home/docker/cp-test_ha-581657-m04_ha-581657-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-581657 ssh -n                                                                 | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | ha-581657-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-581657 ssh -n ha-581657-m03 sudo cat                                          | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | /home/docker/cp-test_ha-581657-m04_ha-581657-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-581657 node stop m02 -v=7                                                     | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-581657 node start m02 -v=7                                                    | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:26 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-581657 -v=7                                                           | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-581657 -v=7                                                                | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:26 UTC | 29 Apr 24 14:27 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-581657 --wait=true -v=7                                                    | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:27 UTC | 29 Apr 24 14:30 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-581657                                                                | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:30 UTC |                     |
	| node    | ha-581657 node delete m03 -v=7                                                   | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:30 UTC | 29 Apr 24 14:30 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-581657 stop -v=7                                                              | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:30 UTC | 29 Apr 24 14:31 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-581657 --wait=true                                                         | ha-581657 | jenkins | v1.33.0 | 29 Apr 24 14:31 UTC | 29 Apr 24 14:33 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 14:31:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 14:31:12.231934 1960604 out.go:291] Setting OutFile to fd 1 ...
	I0429 14:31:12.232157 1960604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:31:12.232184 1960604 out.go:304] Setting ErrFile to fd 2...
	I0429 14:31:12.232203 1960604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:31:12.232481 1960604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
	I0429 14:31:12.232928 1960604 out.go:298] Setting JSON to false
	I0429 14:31:12.234414 1960604 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":36817,"bootTime":1714364256,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 14:31:12.234520 1960604 start.go:139] virtualization:  
	I0429 14:31:12.237738 1960604 out.go:177] * [ha-581657] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 14:31:12.240895 1960604 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 14:31:12.243051 1960604 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 14:31:12.240974 1960604 notify.go:220] Checking for updates...
	I0429 14:31:12.245138 1960604 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:31:12.247220 1960604 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	I0429 14:31:12.249486 1960604 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0429 14:31:12.251678 1960604 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 14:31:12.254366 1960604 config.go:182] Loaded profile config "ha-581657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:31:12.254871 1960604 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 14:31:12.276226 1960604 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 14:31:12.276337 1960604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:31:12.338636 1960604 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:42 SystemTime:2024-04-29 14:31:12.329288067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:31:12.338743 1960604 docker.go:295] overlay module found
	I0429 14:31:12.341620 1960604 out.go:177] * Using the docker driver based on existing profile
	I0429 14:31:12.343630 1960604 start.go:297] selected driver: docker
	I0429 14:31:12.343647 1960604 start.go:901] validating driver "docker" against &{Name:ha-581657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-581657 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kub
evirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:31:12.343785 1960604 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 14:31:12.343890 1960604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:31:12.396372 1960604 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:42 SystemTime:2024-04-29 14:31:12.387589712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:31:12.396848 1960604 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 14:31:12.396903 1960604 cni.go:84] Creating CNI manager for ""
	I0429 14:31:12.396917 1960604 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 14:31:12.396968 1960604 start.go:340] cluster config:
	{Name:ha-581657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-581657 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device
-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:31:12.399481 1960604 out.go:177] * Starting "ha-581657" primary control-plane node in "ha-581657" cluster
	I0429 14:31:12.401577 1960604 cache.go:121] Beginning downloading kic base image for docker with crio
	I0429 14:31:12.404051 1960604 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 14:31:12.406226 1960604 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:31:12.406274 1960604 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	I0429 14:31:12.406286 1960604 cache.go:56] Caching tarball of preloaded images
	I0429 14:31:12.406323 1960604 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 14:31:12.406376 1960604 preload.go:173] Found /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0429 14:31:12.406387 1960604 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 14:31:12.406523 1960604 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/config.json ...
	I0429 14:31:12.420309 1960604 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 14:31:12.420331 1960604 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 14:31:12.420362 1960604 cache.go:194] Successfully downloaded all kic artifacts
	I0429 14:31:12.420392 1960604 start.go:360] acquireMachinesLock for ha-581657: {Name:mkc633d149f24147a08090eaad0ee52dca6152f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 14:31:12.420477 1960604 start.go:364] duration metric: took 52.644µs to acquireMachinesLock for "ha-581657"
	I0429 14:31:12.420499 1960604 start.go:96] Skipping create...Using existing machine configuration
	I0429 14:31:12.420509 1960604 fix.go:54] fixHost starting: 
	I0429 14:31:12.420818 1960604 cli_runner.go:164] Run: docker container inspect ha-581657 --format={{.State.Status}}
	I0429 14:31:12.436565 1960604 fix.go:112] recreateIfNeeded on ha-581657: state=Stopped err=<nil>
	W0429 14:31:12.436597 1960604 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 14:31:12.441655 1960604 out.go:177] * Restarting existing docker container for "ha-581657" ...
	I0429 14:31:12.444215 1960604 cli_runner.go:164] Run: docker start ha-581657
	I0429 14:31:12.713917 1960604 cli_runner.go:164] Run: docker container inspect ha-581657 --format={{.State.Status}}
	I0429 14:31:12.731837 1960604 kic.go:430] container "ha-581657" state is running.
	I0429 14:31:12.732328 1960604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-581657
	I0429 14:31:12.755723 1960604 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/config.json ...
	I0429 14:31:12.756089 1960604 machine.go:94] provisionDockerMachine start ...
	I0429 14:31:12.756211 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657
	I0429 14:31:12.782497 1960604 main.go:141] libmachine: Using SSH client type: native
	I0429 14:31:12.782875 1960604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35102 <nil> <nil>}
	I0429 14:31:12.782888 1960604 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 14:31:12.783737 1960604 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0429 14:31:15.908020 1960604 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-581657
	
	I0429 14:31:15.908104 1960604 ubuntu.go:169] provisioning hostname "ha-581657"
	I0429 14:31:15.908220 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657
	I0429 14:31:15.924604 1960604 main.go:141] libmachine: Using SSH client type: native
	I0429 14:31:15.924975 1960604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35102 <nil> <nil>}
	I0429 14:31:15.924995 1960604 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-581657 && echo "ha-581657" | sudo tee /etc/hostname
	I0429 14:31:16.061043 1960604 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-581657
	
	I0429 14:31:16.061125 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657
	I0429 14:31:16.077749 1960604 main.go:141] libmachine: Using SSH client type: native
	I0429 14:31:16.077996 1960604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35102 <nil> <nil>}
	I0429 14:31:16.078019 1960604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-581657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-581657/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-581657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 14:31:16.200913 1960604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
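The two SSH commands above are how minikube re-provisions the restarted container's identity: one writes /etc/hostname, the other ensures /etc/hosts carries a 127.0.1.1 entry for the node name. A minimal by-hand equivalent, assuming the forwarded SSH port (127.0.0.1:35102), the docker user, and the machine key path that appear elsewhere in this log:

    SSH_KEY=/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657/id_rsa
    # Set the hostname inside the ha-581657 container.
    ssh -i "$SSH_KEY" -p 35102 docker@127.0.0.1 \
      'sudo hostname ha-581657 && echo "ha-581657" | sudo tee /etc/hostname'
    # Make sure the node resolves its own name via 127.0.1.1.
    ssh -i "$SSH_KEY" -p 35102 docker@127.0.0.1 \
      'grep -q "ha-581657" /etc/hosts || echo "127.0.1.1 ha-581657" | sudo tee -a /etc/hosts'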
	I0429 14:31:16.200940 1960604 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18771-1897267/.minikube CaCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18771-1897267/.minikube}
	I0429 14:31:16.201012 1960604 ubuntu.go:177] setting up certificates
	I0429 14:31:16.201030 1960604 provision.go:84] configureAuth start
	I0429 14:31:16.201097 1960604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-581657
	I0429 14:31:16.217615 1960604 provision.go:143] copyHostCerts
	I0429 14:31:16.217661 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem
	I0429 14:31:16.217698 1960604 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem, removing ...
	I0429 14:31:16.217709 1960604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem
	I0429 14:31:16.217790 1960604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem (1078 bytes)
	I0429 14:31:16.217895 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem
	I0429 14:31:16.217918 1960604 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem, removing ...
	I0429 14:31:16.217923 1960604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem
	I0429 14:31:16.217953 1960604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem (1123 bytes)
	I0429 14:31:16.218003 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem
	I0429 14:31:16.218025 1960604 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem, removing ...
	I0429 14:31:16.218030 1960604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem
	I0429 14:31:16.218064 1960604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem (1679 bytes)
	I0429 14:31:16.218120 1960604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem org=jenkins.ha-581657 san=[127.0.0.1 192.168.49.2 ha-581657 localhost minikube]
	I0429 14:31:16.361832 1960604 provision.go:177] copyRemoteCerts
	I0429 14:31:16.361908 1960604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 14:31:16.361954 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657
	I0429 14:31:16.378386 1960604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35102 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657/id_rsa Username:docker}
	I0429 14:31:16.473432 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 14:31:16.473505 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 14:31:16.497216 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 14:31:16.497284 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0429 14:31:16.521787 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 14:31:16.521846 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 14:31:16.545742 1960604 provision.go:87] duration metric: took 344.682789ms to configureAuth
	I0429 14:31:16.545771 1960604 ubuntu.go:193] setting minikube options for container-runtime
	I0429 14:31:16.546033 1960604 config.go:182] Loaded profile config "ha-581657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:31:16.546150 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657
	I0429 14:31:16.562158 1960604 main.go:141] libmachine: Using SSH client type: native
	I0429 14:31:16.562406 1960604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35102 <nil> <nil>}
	I0429 14:31:16.562425 1960604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 14:31:16.950648 1960604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 14:31:16.950673 1960604 machine.go:97] duration metric: took 4.194570239s to provisionDockerMachine
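The %!s(MISSING) in the command above appears to be a placeholder left by Go's fmt when the logged template has more verbs than arguments; the command that actually runs writes a one-line sysconfig file and restarts CRI-O. A sketch of the intended end state, reconstructed from the command and the output echoed back in the log:

    # Reconstructed effect of the provisioning step above (run on the node).
    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio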
	I0429 14:31:16.950700 1960604 start.go:293] postStartSetup for "ha-581657" (driver="docker")
	I0429 14:31:16.950712 1960604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 14:31:16.950774 1960604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 14:31:16.950822 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657
	I0429 14:31:16.968914 1960604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35102 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657/id_rsa Username:docker}
	I0429 14:31:17.065578 1960604 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 14:31:17.068855 1960604 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0429 14:31:17.068890 1960604 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0429 14:31:17.068925 1960604 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0429 14:31:17.068939 1960604 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0429 14:31:17.068949 1960604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-1897267/.minikube/addons for local assets ...
	I0429 14:31:17.069013 1960604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-1897267/.minikube/files for local assets ...
	I0429 14:31:17.069098 1960604 filesync.go:149] local asset: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem -> 19026842.pem in /etc/ssl/certs
	I0429 14:31:17.069110 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem -> /etc/ssl/certs/19026842.pem
	I0429 14:31:17.069218 1960604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 14:31:17.077754 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem --> /etc/ssl/certs/19026842.pem (1708 bytes)
	I0429 14:31:17.101799 1960604 start.go:296] duration metric: took 151.083517ms for postStartSetup
	I0429 14:31:17.101880 1960604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 14:31:17.101919 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657
	I0429 14:31:17.118045 1960604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35102 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657/id_rsa Username:docker}
	I0429 14:31:17.205582 1960604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 14:31:17.210026 1960604 fix.go:56] duration metric: took 4.789514858s for fixHost
	I0429 14:31:17.210055 1960604 start.go:83] releasing machines lock for "ha-581657", held for 4.789567897s
	I0429 14:31:17.210149 1960604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-581657
	I0429 14:31:17.225947 1960604 ssh_runner.go:195] Run: cat /version.json
	I0429 14:31:17.225988 1960604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 14:31:17.226007 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657
	I0429 14:31:17.226040 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657
	I0429 14:31:17.244551 1960604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35102 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657/id_rsa Username:docker}
	I0429 14:31:17.246852 1960604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35102 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657/id_rsa Username:docker}
	I0429 14:31:17.328344 1960604 ssh_runner.go:195] Run: systemctl --version
	I0429 14:31:17.441061 1960604 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 14:31:17.585306 1960604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 14:31:17.589565 1960604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 14:31:17.598025 1960604 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0429 14:31:17.598096 1960604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 14:31:17.606787 1960604 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
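The two find commands above park any preexisting loopback/bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so the kindnet CNI recommended earlier in the log is the only active network plugin. A simplified sketch of that renaming step:

    # Disable stray CNI configs by renaming them (mirrors the find/mv above).
    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
      -not -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;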
	I0429 14:31:17.606813 1960604 start.go:494] detecting cgroup driver to use...
	I0429 14:31:17.606844 1960604 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0429 14:31:17.606901 1960604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 14:31:17.618437 1960604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 14:31:17.629602 1960604 docker.go:217] disabling cri-docker service (if available) ...
	I0429 14:31:17.629687 1960604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 14:31:17.642913 1960604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 14:31:17.654481 1960604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 14:31:17.735360 1960604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 14:31:17.814600 1960604 docker.go:233] disabling docker service ...
	I0429 14:31:17.814723 1960604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 14:31:17.827378 1960604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 14:31:17.839079 1960604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 14:31:17.935517 1960604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 14:31:18.028021 1960604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
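The systemctl sequence above stops, disables, and masks cri-dockerd and the Docker engine so that CRI-O is the only runtime answering on the CRI socket. A condensed equivalent of that sequence:

    # Hand CRI ownership to CRI-O: stop, disable and mask the Docker-based runtimes.
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active docker || true   # expect "inactive"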
	I0429 14:31:18.041264 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 14:31:18.059047 1960604 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 14:31:18.059118 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:31:18.070223 1960604 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 14:31:18.070346 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:31:18.082171 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:31:18.093763 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:31:18.104510 1960604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 14:31:18.114309 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:31:18.124834 1960604 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:31:18.135018 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:31:18.145567 1960604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 14:31:18.154218 1960604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 14:31:18.162755 1960604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:31:18.260199 1960604 ssh_runner.go:195] Run: sudo systemctl restart crio
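The sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls, while crictl is pointed at the CRI-O socket. A spot check of the touched keys, with the approximate expected values shown as comments (only the keys rewritten above; file layout may differ):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected (approximately):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
    cat /etc/crictl.yaml
    #   runtime-endpoint: unix:///var/run/crio/crio.sock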
	I0429 14:31:18.390674 1960604 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 14:31:18.390777 1960604 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 14:31:18.394432 1960604 start.go:562] Will wait 60s for crictl version
	I0429 14:31:18.394514 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:31:18.398157 1960604 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 14:31:18.442376 1960604 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0429 14:31:18.442460 1960604 ssh_runner.go:195] Run: crio --version
	I0429 14:31:18.483514 1960604 ssh_runner.go:195] Run: crio --version
	I0429 14:31:18.525524 1960604 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.24.6 ...
	I0429 14:31:18.527186 1960604 cli_runner.go:164] Run: docker network inspect ha-581657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 14:31:18.542607 1960604 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0429 14:31:18.546186 1960604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 14:31:18.556975 1960604 kubeadm.go:877] updating cluster {Name:ha-581657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-581657 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false l
ogviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 14:31:18.557131 1960604 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:31:18.557211 1960604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 14:31:18.609258 1960604 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 14:31:18.609282 1960604 crio.go:433] Images already preloaded, skipping extraction
	I0429 14:31:18.609339 1960604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 14:31:18.644585 1960604 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 14:31:18.644611 1960604 cache_images.go:84] Images are preloaded, skipping loading
	I0429 14:31:18.644622 1960604 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.0 crio true true} ...
	I0429 14:31:18.644749 1960604 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-581657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-581657 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 14:31:18.644845 1960604 ssh_runner.go:195] Run: crio config
	I0429 14:31:18.702059 1960604 cni.go:84] Creating CNI manager for ""
	I0429 14:31:18.702086 1960604 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 14:31:18.702100 1960604 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 14:31:18.702122 1960604 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-581657 NodeName:ha-581657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 14:31:18.702269 1960604 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-581657"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 14:31:18.702289 1960604 kube-vip.go:111] generating kube-vip config ...
	I0429 14:31:18.702351 1960604 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0429 14:31:18.714814 1960604 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0429 14:31:18.714999 1960604 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
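The generated kube-vip static-pod manifest runs the kube-vip manager with ARP advertisement of the control-plane VIP 192.168.49.254 on eth0, leader election in kube-system, and (per the auto-enabled lb_enable/lb_port settings) load balancing of API-server traffic on port 8443. Once the node is back up, a quick in-node check that the VIP actually landed, assuming the interface and address from the manifest above:

    # From inside the ha-581657 node: is kube-vip running and is the VIP bound?
    sudo crictl ps --name kube-vip
    ip addr show eth0 | grep 192.168.49.254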
	I0429 14:31:18.715078 1960604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 14:31:18.723664 1960604 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 14:31:18.723732 1960604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 14:31:18.732950 1960604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0429 14:31:18.750448 1960604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 14:31:18.768306 1960604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0429 14:31:18.786329 1960604 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
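The kubeadm configuration and kube-vip manifest generated above are staged onto the node here (kubeadm.yaml.new, kube-vip.yaml). The test does not validate the staged file itself, but a hypothetical pre-flight check with the same kubeadm binary would look like:

    # Hypothetical: validate the staged config with the node's kubeadm
    # (the `config validate` subcommand exists in recent kubeadm releases).
    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new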
	I0429 14:31:18.804048 1960604 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0429 14:31:18.807629 1960604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 14:31:18.818804 1960604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:31:18.900075 1960604 ssh_runner.go:195] Run: sudo systemctl start kubelet
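After the unit files are reloaded and the kubelet is (re)started, the rest of the log moves on to certificates and the cluster restart. A standard, test-independent way to confirm the kubelet came up on the node:

    sudo systemctl is-active kubelet             # expect "active"
    sudo journalctl -u kubelet --no-pager -n 20  # last few kubelet log lines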
	I0429 14:31:18.913418 1960604 certs.go:68] Setting up /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657 for IP: 192.168.49.2
	I0429 14:31:18.913483 1960604 certs.go:194] generating shared ca certs ...
	I0429 14:31:18.913513 1960604 certs.go:226] acquiring lock for ca certs: {Name:mk012c6865f9f1625b7bfd5d0280b6707793520e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:31:18.913668 1960604 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key
	I0429 14:31:18.913753 1960604 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key
	I0429 14:31:18.913789 1960604 certs.go:256] generating profile certs ...
	I0429 14:31:18.913929 1960604 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/client.key
	I0429 14:31:18.913981 1960604 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.key.791046dd
	I0429 14:31:18.914010 1960604 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.crt.791046dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0429 14:31:19.562125 1960604 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.crt.791046dd ...
	I0429 14:31:19.562158 1960604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.crt.791046dd: {Name:mk599b905bba0bee99e5f168b1f1a5c58e7eace5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:31:19.562355 1960604 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.key.791046dd ...
	I0429 14:31:19.562371 1960604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.key.791046dd: {Name:mkf6b4e104657bd252930722d60d722a2ce96819 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:31:19.562469 1960604 certs.go:381] copying /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.crt.791046dd -> /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.crt
	I0429 14:31:19.562616 1960604 certs.go:385] copying /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.key.791046dd -> /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.key
	I0429 14:31:19.562744 1960604 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/proxy-client.key
	I0429 14:31:19.562762 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 14:31:19.562778 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 14:31:19.562794 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 14:31:19.562809 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 14:31:19.562821 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 14:31:19.562835 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 14:31:19.562854 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 14:31:19.562869 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
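A new apiserver serving certificate is generated above with SANs for the service IP, localhost, both control-plane node IPs, and the kube-vip VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, 192.168.49.3, 192.168.49.254). A quick way to confirm the SANs on the written certificate, using the profile path from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.crt \
      | grep -A1 'Subject Alternative Name'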
	I0429 14:31:19.562921 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684.pem (1338 bytes)
	W0429 14:31:19.562963 1960604 certs.go:480] ignoring /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684_empty.pem, impossibly tiny 0 bytes
	I0429 14:31:19.562976 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 14:31:19.563005 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem (1078 bytes)
	I0429 14:31:19.563033 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem (1123 bytes)
	I0429 14:31:19.563063 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem (1679 bytes)
	I0429 14:31:19.563136 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem (1708 bytes)
	I0429 14:31:19.563172 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem -> /usr/share/ca-certificates/19026842.pem
	I0429 14:31:19.563185 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:31:19.563206 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684.pem -> /usr/share/ca-certificates/1902684.pem
	I0429 14:31:19.563821 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 14:31:19.589367 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 14:31:19.614180 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 14:31:19.638591 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 14:31:19.662702 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0429 14:31:19.688163 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 14:31:19.713661 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 14:31:19.738060 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 14:31:19.762557 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem --> /usr/share/ca-certificates/19026842.pem (1708 bytes)
	I0429 14:31:19.787189 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 14:31:19.811619 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684.pem --> /usr/share/ca-certificates/1902684.pem (1338 bytes)
	I0429 14:31:19.835135 1960604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 14:31:19.852792 1960604 ssh_runner.go:195] Run: openssl version
	I0429 14:31:19.858182 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19026842.pem && ln -fs /usr/share/ca-certificates/19026842.pem /etc/ssl/certs/19026842.pem"
	I0429 14:31:19.867776 1960604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19026842.pem
	I0429 14:31:19.871335 1960604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 14:18 /usr/share/ca-certificates/19026842.pem
	I0429 14:31:19.871402 1960604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19026842.pem
	I0429 14:31:19.878378 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19026842.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 14:31:19.887359 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 14:31:19.896630 1960604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:31:19.900121 1960604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 14:07 /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:31:19.900203 1960604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:31:19.906960 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 14:31:19.915869 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1902684.pem && ln -fs /usr/share/ca-certificates/1902684.pem /etc/ssl/certs/1902684.pem"
	I0429 14:31:19.925444 1960604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1902684.pem
	I0429 14:31:19.928876 1960604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 14:18 /usr/share/ca-certificates/1902684.pem
	I0429 14:31:19.928936 1960604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1902684.pem
	I0429 14:31:19.935697 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1902684.pem /etc/ssl/certs/51391683.0"
	I0429 14:31:19.944511 1960604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 14:31:19.948147 1960604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 14:31:19.955195 1960604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 14:31:19.962095 1960604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 14:31:19.968861 1960604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 14:31:19.975832 1960604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 14:31:19.982819 1960604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
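The openssl/ln pairs above install minikube's CAs under /etc/ssl/certs using OpenSSL's subject-hash naming (e.g. b5213941.0 for minikubeCA.pem), and the -checkend 86400 runs assert that each cluster certificate stays valid for at least another 24 hours. The same pattern by hand:

    # Link a CA into /etc/ssl/certs under its subject hash, then check expiry headroom.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo "valid for >24h"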
	I0429 14:31:19.989744 1960604 kubeadm.go:391] StartCluster: {Name:ha-581657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-581657 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:31:19.989874 1960604 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 14:31:19.989934 1960604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 14:31:20.037931 1960604 cri.go:89] found id: ""
	I0429 14:31:20.038066 1960604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 14:31:20.047285 1960604 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 14:31:20.047305 1960604 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 14:31:20.047311 1960604 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 14:31:20.047394 1960604 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 14:31:20.056170 1960604 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 14:31:20.056594 1960604 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-581657" does not appear in /home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:31:20.056761 1960604 kubeconfig.go:62] /home/jenkins/minikube-integration/18771-1897267/kubeconfig needs updating (will repair): [kubeconfig missing "ha-581657" cluster setting kubeconfig missing "ha-581657" context setting]
	I0429 14:31:20.057037 1960604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/kubeconfig: {Name:mkd7a824e40528d6a3c0c02051ff0aa2d4aebaa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:31:20.057452 1960604 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:31:20.057723 1960604 kapi.go:59] client config for ha-581657: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/client.crt", KeyFile:"/home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/client.key", CAFile:"/home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17a1740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 14:31:20.058181 1960604 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 14:31:20.058328 1960604 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 14:31:20.067141 1960604 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.49.2
	I0429 14:31:20.067214 1960604 kubeadm.go:591] duration metric: took 19.896355ms to restartPrimaryControlPlane
	I0429 14:31:20.067230 1960604 kubeadm.go:393] duration metric: took 77.497648ms to StartCluster
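
The "does not require reconfiguration" decision above comes from diffing the kubeadm config already on the node (/var/tmp/minikube/kubeadm.yaml) against the freshly rendered kubeadm.yaml.new; when they match, restartPrimaryControlPlane can return without touching the running control plane. The log shows this done with sudo diff -u over SSH; a rough local sketch of the same comparison in Go, assuming both files are readable:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        current, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        rendered, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        if bytes.Equal(current, rendered) {
            fmt.Println("running cluster does not require reconfiguration")
        } else {
            fmt.Println("kubeadm config changed; control plane needs to be reconfigured")
        }
    }
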
	I0429 14:31:20.067246 1960604 settings.go:142] acquiring lock: {Name:mkd5b42c61905151cf6a97c69329c4a81e851953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:31:20.067315 1960604 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:31:20.067944 1960604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18771-1897267/kubeconfig: {Name:mkd7a824e40528d6a3c0c02051ff0aa2d4aebaa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:31:20.068188 1960604 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 14:31:20.068214 1960604 start.go:240] waiting for startup goroutines ...
	I0429 14:31:20.068229 1960604 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 14:31:20.073235 1960604 out.go:177] * Enabled addons: 
	I0429 14:31:20.068525 1960604 config.go:182] Loaded profile config "ha-581657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:31:20.075161 1960604 addons.go:505] duration metric: took 6.930469ms for enable addons: enabled=[]
	I0429 14:31:20.075222 1960604 start.go:245] waiting for cluster config update ...
	I0429 14:31:20.075241 1960604 start.go:254] writing updated cluster config ...
	I0429 14:31:20.077370 1960604 out.go:177] 
	I0429 14:31:20.079425 1960604 config.go:182] Loaded profile config "ha-581657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:31:20.079563 1960604 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/config.json ...
	I0429 14:31:20.081796 1960604 out.go:177] * Starting "ha-581657-m02" control-plane node in "ha-581657" cluster
	I0429 14:31:20.083536 1960604 cache.go:121] Beginning downloading kic base image for docker with crio
	I0429 14:31:20.085458 1960604 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 14:31:20.087232 1960604 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:31:20.087258 1960604 cache.go:56] Caching tarball of preloaded images
	I0429 14:31:20.087297 1960604 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 14:31:20.087383 1960604 preload.go:173] Found /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0429 14:31:20.087401 1960604 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 14:31:20.087547 1960604 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/config.json ...
	I0429 14:31:20.103936 1960604 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 14:31:20.103963 1960604 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 14:31:20.103990 1960604 cache.go:194] Successfully downloaded all kic artifacts
	I0429 14:31:20.104029 1960604 start.go:360] acquireMachinesLock for ha-581657-m02: {Name:mk53e78c83910a2a271b302667c610360af9d065 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 14:31:20.104104 1960604 start.go:364] duration metric: took 57.944µs to acquireMachinesLock for "ha-581657-m02"
	I0429 14:31:20.104126 1960604 start.go:96] Skipping create...Using existing machine configuration
	I0429 14:31:20.104131 1960604 fix.go:54] fixHost starting: m02
	I0429 14:31:20.104418 1960604 cli_runner.go:164] Run: docker container inspect ha-581657-m02 --format={{.State.Status}}
	I0429 14:31:20.123227 1960604 fix.go:112] recreateIfNeeded on ha-581657-m02: state=Stopped err=<nil>
	W0429 14:31:20.123253 1960604 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 14:31:20.125307 1960604 out.go:177] * Restarting existing docker container for "ha-581657-m02" ...
	I0429 14:31:20.127340 1960604 cli_runner.go:164] Run: docker start ha-581657-m02
	I0429 14:31:20.435572 1960604 cli_runner.go:164] Run: docker container inspect ha-581657-m02 --format={{.State.Status}}
	I0429 14:31:20.455849 1960604 kic.go:430] container "ha-581657-m02" state is running.
	I0429 14:31:20.456198 1960604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-581657-m02
	I0429 14:31:20.480040 1960604 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/config.json ...
	I0429 14:31:20.480305 1960604 machine.go:94] provisionDockerMachine start ...
	I0429 14:31:20.480379 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m02
	I0429 14:31:20.501263 1960604 main.go:141] libmachine: Using SSH client type: native
	I0429 14:31:20.501817 1960604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35107 <nil> <nil>}
	I0429 14:31:20.501838 1960604 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 14:31:20.502489 1960604 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
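
The handshake EOF here is expected: the m02 container was restarted a couple of seconds earlier, so sshd inside it is not accepting connections yet, and provisioning simply retries until the SSH command succeeds (about three seconds later below). A simplified reachability/retry loop, assuming the forwarded port from this run (127.0.0.1:35107) and a fixed two-second backoff; libmachine's real retry logic differs, and a plain TCP connect is only a stand-in for a full SSH handshake:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "127.0.0.1:35107" // forwarded SSH port for ha-581657-m02 in this run
        deadline := time.Now().Add(1 * time.Minute)
        for {
            conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("sshd is accepting connections")
                return
            }
            if time.Now().After(deadline) {
                fmt.Println("gave up waiting for ssh:", err)
                return
            }
            fmt.Println("ssh not ready yet, retrying:", err)
            time.Sleep(2 * time.Second)
        }
    }
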
	I0429 14:31:23.681773 1960604 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-581657-m02
	
	I0429 14:31:23.681798 1960604 ubuntu.go:169] provisioning hostname "ha-581657-m02"
	I0429 14:31:23.681863 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m02
	I0429 14:31:23.700463 1960604 main.go:141] libmachine: Using SSH client type: native
	I0429 14:31:23.700823 1960604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35107 <nil> <nil>}
	I0429 14:31:23.700870 1960604 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-581657-m02 && echo "ha-581657-m02" | sudo tee /etc/hostname
	I0429 14:31:23.915268 1960604 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-581657-m02
	
	I0429 14:31:23.915411 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m02
	I0429 14:31:23.950013 1960604 main.go:141] libmachine: Using SSH client type: native
	I0429 14:31:23.950256 1960604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35107 <nil> <nil>}
	I0429 14:31:23.950274 1960604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-581657-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-581657-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-581657-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 14:31:24.125762 1960604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 14:31:24.125839 1960604 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18771-1897267/.minikube CaCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18771-1897267/.minikube}
	I0429 14:31:24.125871 1960604 ubuntu.go:177] setting up certificates
	I0429 14:31:24.125911 1960604 provision.go:84] configureAuth start
	I0429 14:31:24.126009 1960604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-581657-m02
	I0429 14:31:24.160019 1960604 provision.go:143] copyHostCerts
	I0429 14:31:24.160063 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem
	I0429 14:31:24.160100 1960604 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem, removing ...
	I0429 14:31:24.160107 1960604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem
	I0429 14:31:24.160185 1960604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem (1123 bytes)
	I0429 14:31:24.160264 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem
	I0429 14:31:24.160279 1960604 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem, removing ...
	I0429 14:31:24.160284 1960604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem
	I0429 14:31:24.160313 1960604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem (1679 bytes)
	I0429 14:31:24.160351 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem
	I0429 14:31:24.160366 1960604 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem, removing ...
	I0429 14:31:24.160370 1960604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem
	I0429 14:31:24.160392 1960604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem (1078 bytes)
	I0429 14:31:24.160473 1960604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem org=jenkins.ha-581657-m02 san=[127.0.0.1 192.168.49.3 ha-581657-m02 localhost minikube]
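
The line above regenerates the machine's TLS server certificate, signed by the local minikube CA, with the SANs listed (127.0.0.1, the node IP 192.168.49.3, the hostname, localhost, minikube); the copyRemoteCerts step below then pushes it to /etc/docker on the node. A compact crypto/x509 sketch of issuing such a SAN-bearing server certificate from an existing CA; the file paths, serial number, and lifetime are illustrative, and the CA key is assumed to be PKCS#1 PEM:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Load the CA certificate and key (illustrative paths).
        caPEM, _ := os.ReadFile("ca.pem")
        caKeyPEM, _ := os.ReadFile("ca-key.pem")
        caBlock, _ := pem.Decode(caPEM)
        ca, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            panic(err)
        }
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        if err != nil {
            panic(err)
        }

        // New server key and a template carrying the SANs from the log line above.
        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-581657-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-581657-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &serverKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
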
	I0429 14:31:24.978109 1960604 provision.go:177] copyRemoteCerts
	I0429 14:31:24.978238 1960604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 14:31:24.978318 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m02
	I0429 14:31:24.998785 1960604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35107 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657-m02/id_rsa Username:docker}
	I0429 14:31:25.104624 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 14:31:25.104706 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 14:31:25.130640 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 14:31:25.130698 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 14:31:25.158718 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 14:31:25.158831 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 14:31:25.189220 1960604 provision.go:87] duration metric: took 1.063277638s to configureAuth
	I0429 14:31:25.189294 1960604 ubuntu.go:193] setting minikube options for container-runtime
	I0429 14:31:25.189571 1960604 config.go:182] Loaded profile config "ha-581657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:31:25.189733 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m02
	I0429 14:31:25.213083 1960604 main.go:141] libmachine: Using SSH client type: native
	I0429 14:31:25.213339 1960604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35107 <nil> <nil>}
	I0429 14:31:25.213359 1960604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 14:31:25.675500 1960604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 14:31:25.675525 1960604 machine.go:97] duration metric: took 5.195205328s to provisionDockerMachine
	I0429 14:31:25.675537 1960604 start.go:293] postStartSetup for "ha-581657-m02" (driver="docker")
	I0429 14:31:25.675550 1960604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 14:31:25.675610 1960604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 14:31:25.675656 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m02
	I0429 14:31:25.695245 1960604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35107 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657-m02/id_rsa Username:docker}
	I0429 14:31:25.786208 1960604 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 14:31:25.792194 1960604 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0429 14:31:25.792229 1960604 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0429 14:31:25.792239 1960604 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0429 14:31:25.792247 1960604 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0429 14:31:25.792258 1960604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-1897267/.minikube/addons for local assets ...
	I0429 14:31:25.792318 1960604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-1897267/.minikube/files for local assets ...
	I0429 14:31:25.792432 1960604 filesync.go:149] local asset: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem -> 19026842.pem in /etc/ssl/certs
	I0429 14:31:25.792440 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem -> /etc/ssl/certs/19026842.pem
	I0429 14:31:25.792538 1960604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 14:31:25.806543 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem --> /etc/ssl/certs/19026842.pem (1708 bytes)
	I0429 14:31:25.853137 1960604 start.go:296] duration metric: took 177.583755ms for postStartSetup
	I0429 14:31:25.853227 1960604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 14:31:25.853272 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m02
	I0429 14:31:25.907267 1960604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35107 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657-m02/id_rsa Username:docker}
	I0429 14:31:26.009528 1960604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 14:31:26.016314 1960604 fix.go:56] duration metric: took 5.912164208s for fixHost
	I0429 14:31:26.016341 1960604 start.go:83] releasing machines lock for "ha-581657-m02", held for 5.912227764s
	I0429 14:31:26.016414 1960604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-581657-m02
	I0429 14:31:26.042804 1960604 out.go:177] * Found network options:
	I0429 14:31:26.045268 1960604 out.go:177]   - NO_PROXY=192.168.49.2
	W0429 14:31:26.047380 1960604 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 14:31:26.047427 1960604 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 14:31:26.047519 1960604 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 14:31:26.047567 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m02
	I0429 14:31:26.047802 1960604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 14:31:26.047868 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m02
	I0429 14:31:26.087393 1960604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35107 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657-m02/id_rsa Username:docker}
	I0429 14:31:26.094415 1960604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35107 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657-m02/id_rsa Username:docker}
	I0429 14:31:26.357342 1960604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 14:31:26.371300 1960604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 14:31:26.391333 1960604 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0429 14:31:26.391488 1960604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 14:31:26.415669 1960604 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 14:31:26.415742 1960604 start.go:494] detecting cgroup driver to use...
	I0429 14:31:26.415788 1960604 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0429 14:31:26.415864 1960604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 14:31:26.439660 1960604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 14:31:26.454161 1960604 docker.go:217] disabling cri-docker service (if available) ...
	I0429 14:31:26.454278 1960604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 14:31:26.478825 1960604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 14:31:26.538539 1960604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 14:31:27.059523 1960604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 14:31:27.455949 1960604 docker.go:233] disabling docker service ...
	I0429 14:31:27.456021 1960604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 14:31:27.485207 1960604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 14:31:27.513059 1960604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 14:31:27.806559 1960604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 14:31:28.035061 1960604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 14:31:28.089296 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 14:31:28.155452 1960604 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 14:31:28.155579 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:31:28.197230 1960604 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 14:31:28.197369 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:31:28.241249 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:31:28.283691 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:31:28.318248 1960604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 14:31:28.346933 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:31:28.395672 1960604 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:31:28.447120 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:31:28.495505 1960604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 14:31:28.525163 1960604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 14:31:28.550711 1960604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:31:28.862872 1960604 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 14:31:30.321975 1960604 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.459066473s)
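
The sed commands at 14:31:28 rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.9 as the pause image, cgroupfs as the cgroup manager with conmon in the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 as a default sysctl; crio is then restarted to pick the changes up. A rough Go equivalent of just the pause-image substitution, assuming the config file is writable locally:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        // Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        if err := os.WriteFile(conf, out, 0o644); err != nil {
            panic(err)
        }
    }
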
	I0429 14:31:30.322003 1960604 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 14:31:30.322065 1960604 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 14:31:30.326732 1960604 start.go:562] Will wait 60s for crictl version
	I0429 14:31:30.326841 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:31:30.337630 1960604 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 14:31:30.418192 1960604 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0429 14:31:30.418359 1960604 ssh_runner.go:195] Run: crio --version
	I0429 14:31:30.487043 1960604 ssh_runner.go:195] Run: crio --version
	I0429 14:31:30.599534 1960604 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.24.6 ...
	I0429 14:31:30.601495 1960604 out.go:177]   - env NO_PROXY=192.168.49.2
	I0429 14:31:30.603565 1960604 cli_runner.go:164] Run: docker network inspect ha-581657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 14:31:30.626344 1960604 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0429 14:31:30.636419 1960604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 14:31:30.664684 1960604 mustload.go:65] Loading cluster: ha-581657
	I0429 14:31:30.664928 1960604 config.go:182] Loaded profile config "ha-581657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:31:30.665234 1960604 cli_runner.go:164] Run: docker container inspect ha-581657 --format={{.State.Status}}
	I0429 14:31:30.683038 1960604 host.go:66] Checking if "ha-581657" exists ...
	I0429 14:31:30.683317 1960604 certs.go:68] Setting up /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657 for IP: 192.168.49.3
	I0429 14:31:30.683332 1960604 certs.go:194] generating shared ca certs ...
	I0429 14:31:30.683348 1960604 certs.go:226] acquiring lock for ca certs: {Name:mk012c6865f9f1625b7bfd5d0280b6707793520e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:31:30.683481 1960604 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key
	I0429 14:31:30.683527 1960604 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key
	I0429 14:31:30.683538 1960604 certs.go:256] generating profile certs ...
	I0429 14:31:30.683614 1960604 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/client.key
	I0429 14:31:30.683677 1960604 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.key.6417d708
	I0429 14:31:30.683718 1960604 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/proxy-client.key
	I0429 14:31:30.683731 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 14:31:30.683743 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 14:31:30.683757 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 14:31:30.683770 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 14:31:30.683786 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 14:31:30.683798 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 14:31:30.683812 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 14:31:30.683823 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 14:31:30.683875 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684.pem (1338 bytes)
	W0429 14:31:30.683918 1960604 certs.go:480] ignoring /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684_empty.pem, impossibly tiny 0 bytes
	I0429 14:31:30.683931 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 14:31:30.683954 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem (1078 bytes)
	I0429 14:31:30.683984 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem (1123 bytes)
	I0429 14:31:30.684009 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem (1679 bytes)
	I0429 14:31:30.684056 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem (1708 bytes)
	I0429 14:31:30.684093 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684.pem -> /usr/share/ca-certificates/1902684.pem
	I0429 14:31:30.684110 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem -> /usr/share/ca-certificates/19026842.pem
	I0429 14:31:30.684126 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:31:30.684181 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657
	I0429 14:31:30.701630 1960604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35102 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657/id_rsa Username:docker}
	I0429 14:31:30.788965 1960604 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0429 14:31:30.799707 1960604 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0429 14:31:30.832417 1960604 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0429 14:31:30.843148 1960604 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0429 14:31:30.875290 1960604 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0429 14:31:30.887503 1960604 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0429 14:31:30.919280 1960604 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0429 14:31:30.931570 1960604 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0429 14:31:30.962981 1960604 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0429 14:31:30.975397 1960604 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0429 14:31:30.988841 1960604 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0429 14:31:30.993173 1960604 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0429 14:31:31.007837 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 14:31:31.036530 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 14:31:31.070071 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 14:31:31.112360 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 14:31:31.162208 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0429 14:31:31.201310 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 14:31:31.238090 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 14:31:31.282856 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 14:31:31.314327 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684.pem --> /usr/share/ca-certificates/1902684.pem (1338 bytes)
	I0429 14:31:31.344158 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem --> /usr/share/ca-certificates/19026842.pem (1708 bytes)
	I0429 14:31:31.380310 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 14:31:31.414977 1960604 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0429 14:31:31.437450 1960604 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0429 14:31:31.456767 1960604 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0429 14:31:31.486281 1960604 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0429 14:31:31.518949 1960604 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0429 14:31:31.546316 1960604 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0429 14:31:31.566367 1960604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0429 14:31:31.597923 1960604 ssh_runner.go:195] Run: openssl version
	I0429 14:31:31.603983 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1902684.pem && ln -fs /usr/share/ca-certificates/1902684.pem /etc/ssl/certs/1902684.pem"
	I0429 14:31:31.613934 1960604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1902684.pem
	I0429 14:31:31.617832 1960604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 14:18 /usr/share/ca-certificates/1902684.pem
	I0429 14:31:31.617897 1960604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1902684.pem
	I0429 14:31:31.625227 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1902684.pem /etc/ssl/certs/51391683.0"
	I0429 14:31:31.635277 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19026842.pem && ln -fs /usr/share/ca-certificates/19026842.pem /etc/ssl/certs/19026842.pem"
	I0429 14:31:31.646441 1960604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19026842.pem
	I0429 14:31:31.650290 1960604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 14:18 /usr/share/ca-certificates/19026842.pem
	I0429 14:31:31.650360 1960604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19026842.pem
	I0429 14:31:31.658052 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19026842.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 14:31:31.667900 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 14:31:31.678258 1960604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:31:31.682080 1960604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 14:07 /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:31:31.682183 1960604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:31:31.690173 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 14:31:31.699186 1960604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 14:31:31.702799 1960604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 14:31:31.709649 1960604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 14:31:31.717774 1960604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 14:31:31.724615 1960604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 14:31:31.731934 1960604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 14:31:31.739703 1960604 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 14:31:31.747041 1960604 kubeadm.go:928] updating node {m02 192.168.49.3 8443 v1.30.0 crio true true} ...
	I0429 14:31:31.747170 1960604 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-581657-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-581657 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 14:31:31.747226 1960604 kube-vip.go:111] generating kube-vip config ...
	I0429 14:31:31.747284 1960604 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0429 14:31:31.759844 1960604 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0429 14:31:31.759951 1960604 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
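
The manifest above is what gets written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so kubelet runs kube-vip as a static pod that advertises the control-plane VIP 192.168.49.254. The lb_enable/lb_port entries were added because the lsmod check at 14:31:31 found the ip_vs module loaded; a simplified sketch of that conditional, with the actual template rendering omitted:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same check as the log: is the IPVS kernel module available?
        out, _ := exec.Command("sh", "-c", "lsmod | grep ip_vs").Output()
        lbEnable := strings.Contains(string(out), "ip_vs")
        if lbEnable {
            fmt.Println("auto-enabling control-plane load-balancing in kube-vip")
        }
        // The result would be injected into the static pod env as lb_enable.
        fmt.Printf("lb_enable=%t\n", lbEnable)
    }
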
	I0429 14:31:31.760034 1960604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 14:31:31.768806 1960604 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 14:31:31.768953 1960604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0429 14:31:31.777560 1960604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0429 14:31:31.796350 1960604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 14:31:31.815032 1960604 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0429 14:31:31.833683 1960604 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0429 14:31:31.837123 1960604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 14:31:31.847924 1960604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:31:31.963990 1960604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 14:31:31.976214 1960604 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 14:31:31.980045 1960604 out.go:177] * Verifying Kubernetes components...
	I0429 14:31:31.976754 1960604 config.go:182] Loaded profile config "ha-581657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:31:31.982165 1960604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:31:32.099618 1960604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 14:31:32.114598 1960604 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:31:32.114878 1960604 kapi.go:59] client config for ha-581657: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/client.crt", KeyFile:"/home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/client.key", CAFile:"/home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17a1740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 14:31:32.114948 1960604 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0429 14:31:32.115915 1960604 node_ready.go:35] waiting up to 6m0s for node "ha-581657-m02" to be "Ready" ...
	I0429 14:31:32.116006 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:31:32.116018 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:32.116027 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:32.116030 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:42.960294 1960604 round_trippers.go:574] Response Status: 500 Internal Server Error in 10844 milliseconds
	I0429 14:31:42.960876 1960604 node_ready.go:53] error getting node "ha-581657-m02": etcdserver: request timed out
	I0429 14:31:42.960940 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:31:42.960945 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:42.960953 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:42.960957 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:45.290029 1960604 round_trippers.go:574] Response Status: 500 Internal Server Error in 2329 milliseconds
	I0429 14:31:45.290145 1960604 node_ready.go:53] error getting node "ha-581657-m02": etcdserver: leader changed
	I0429 14:31:45.290208 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:31:45.290213 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:45.290220 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:45.290225 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:45.329221 1960604 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0429 14:31:45.330052 1960604 node_ready.go:49] node "ha-581657-m02" has status "Ready":"True"
	I0429 14:31:45.330066 1960604 node_ready.go:38] duration metric: took 13.21411973s for node "ha-581657-m02" to be "Ready" ...
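
The two 500 responses above ("etcdserver: request timed out", then "etcdserver: leader changed") are transient while the restarted etcd members re-elect a leader; the node-ready wait simply re-issues the GET until the node object reports the Ready condition, which takes about 13 seconds here. A minimal client-go sketch of that wait, assuming the kubeconfig path from this run and deliberately simple error handling:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18771-1897267/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for deadline := time.Now().Add(6 * time.Minute); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-581657-m02", metav1.GetOptions{})
            if err != nil {
                // e.g. "etcdserver: request timed out" or "etcdserver: leader changed"
                fmt.Println("retrying, apiserver/etcd not ready:", err)
                continue
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Println("node ha-581657-m02 is Ready")
                    return
                }
            }
        }
        fmt.Println("timed out waiting for node to become Ready")
    }
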
	I0429 14:31:45.330076 1960604 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 14:31:45.330149 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0429 14:31:45.330155 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:45.330163 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:45.330167 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:45.392833 1960604 round_trippers.go:574] Response Status: 200 OK in 62 milliseconds
	I0429 14:31:45.410676 1960604 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9nqsr" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:45.410832 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:31:45.410857 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:45.410883 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:45.410903 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:45.421527 1960604 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 14:31:45.423312 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:31:45.423331 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:45.423341 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:45.423345 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:45.429722 1960604 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 14:31:45.430285 1960604 pod_ready.go:92] pod "coredns-7db6d8ff4d-9nqsr" in "kube-system" namespace has status "Ready":"True"
	I0429 14:31:45.430299 1960604 pod_ready.go:81] duration metric: took 19.551274ms for pod "coredns-7db6d8ff4d-9nqsr" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:45.430309 1960604 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qvn8n" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:45.430376 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-qvn8n
	I0429 14:31:45.430381 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:45.430389 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:45.430394 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:45.434045 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:31:45.434828 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:31:45.434878 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:45.434901 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:45.434931 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:45.438926 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:31:45.439919 1960604 pod_ready.go:92] pod "coredns-7db6d8ff4d-qvn8n" in "kube-system" namespace has status "Ready":"True"
	I0429 14:31:45.439981 1960604 pod_ready.go:81] duration metric: took 9.662752ms for pod "coredns-7db6d8ff4d-qvn8n" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:45.440009 1960604 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-581657" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:45.440107 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-581657
	I0429 14:31:45.440141 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:45.440162 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:45.440182 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:45.442941 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:31:45.443645 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:31:45.443695 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:45.443720 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:45.443739 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:45.448536 1960604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 14:31:45.449595 1960604 pod_ready.go:92] pod "etcd-ha-581657" in "kube-system" namespace has status "Ready":"True"
	I0429 14:31:45.449654 1960604 pod_ready.go:81] duration metric: took 9.614185ms for pod "etcd-ha-581657" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:45.449679 1960604 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-581657-m02" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:45.449772 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-581657-m02
	I0429 14:31:45.449804 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:45.449833 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:45.449850 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:45.452105 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:31:45.453071 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:31:45.453119 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:45.453140 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:45.453161 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:45.456266 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:31:45.456943 1960604 pod_ready.go:92] pod "etcd-ha-581657-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 14:31:45.456996 1960604 pod_ready.go:81] duration metric: took 7.297317ms for pod "etcd-ha-581657-m02" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:45.457021 1960604 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-581657-m03" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:45.490310 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-581657-m03
	I0429 14:31:45.490379 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:45.490402 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:45.490421 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:45.493507 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:31:45.690577 1960604 request.go:629] Waited for 196.346874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657-m03
	I0429 14:31:45.690636 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m03
	I0429 14:31:45.690641 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:45.690649 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:45.690660 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:45.693183 1960604 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0429 14:31:45.693312 1960604 pod_ready.go:97] node "ha-581657-m03" hosting pod "etcd-ha-581657-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-581657-m03": nodes "ha-581657-m03" not found
	I0429 14:31:45.693329 1960604 pod_ready.go:81] duration metric: took 236.289434ms for pod "etcd-ha-581657-m03" in "kube-system" namespace to be "Ready" ...
	E0429 14:31:45.693347 1960604 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-581657-m03" hosting pod "etcd-ha-581657-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-581657-m03": nodes "ha-581657-m03" not found
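
The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's default client-side rate limiter, not from the API server. A minimal sketch, assuming only a reachable kubeconfig (the path below is a placeholder), of how a caller can raise the QPS/Burst limits that produce these waits:

// Sketch only: shows where client-go's client-side throttling is configured.
// The kubeconfig path is a hypothetical example value.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // client-go default is 5 requests/second
	cfg.Burst = 100 // client-go default burst is 10
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}

With the defaults (QPS 5, burst 10), bursts of GETs like the readiness loop above are delayed roughly 200ms apiece, which matches the ~195ms waits in the log.
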
	I0429 14:31:45.693372 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-581657" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:45.890799 1960604 request.go:629] Waited for 197.347188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-581657
	I0429 14:31:45.890879 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-581657
	I0429 14:31:45.890885 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:45.890899 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:45.890905 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:45.893735 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:31:46.090945 1960604 request.go:629] Waited for 196.372187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:31:46.091026 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:31:46.091047 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:46.091059 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:46.091065 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:46.095444 1960604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 14:31:46.096012 1960604 pod_ready.go:92] pod "kube-apiserver-ha-581657" in "kube-system" namespace has status "Ready":"True"
	I0429 14:31:46.096040 1960604 pod_ready.go:81] duration metric: took 402.651603ms for pod "kube-apiserver-ha-581657" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:46.096069 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-581657-m02" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:46.291139 1960604 request.go:629] Waited for 194.999058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-581657-m02
	I0429 14:31:46.291271 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-581657-m02
	I0429 14:31:46.291284 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:46.291293 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:46.291299 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:46.294051 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:31:46.490324 1960604 request.go:629] Waited for 195.241477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:31:46.490471 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:31:46.490484 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:46.490493 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:46.490499 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:46.493299 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:31:46.493862 1960604 pod_ready.go:92] pod "kube-apiserver-ha-581657-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 14:31:46.493883 1960604 pod_ready.go:81] duration metric: took 397.801631ms for pod "kube-apiserver-ha-581657-m02" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:46.493895 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-581657-m03" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:46.690881 1960604 request.go:629] Waited for 196.88603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-581657-m03
	I0429 14:31:46.690944 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-581657-m03
	I0429 14:31:46.690955 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:46.690964 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:46.690972 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:46.693726 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:31:46.890794 1960604 request.go:629] Waited for 196.317492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657-m03
	I0429 14:31:46.890856 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m03
	I0429 14:31:46.890878 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:46.890886 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:46.890892 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:46.893345 1960604 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0429 14:31:46.893461 1960604 pod_ready.go:97] node "ha-581657-m03" hosting pod "kube-apiserver-ha-581657-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-581657-m03": nodes "ha-581657-m03" not found
	I0429 14:31:46.893478 1960604 pod_ready.go:81] duration metric: took 399.559553ms for pod "kube-apiserver-ha-581657-m03" in "kube-system" namespace to be "Ready" ...
	E0429 14:31:46.893489 1960604 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-581657-m03" hosting pod "kube-apiserver-ha-581657-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-581657-m03": nodes "ha-581657-m03" not found
	I0429 14:31:46.893501 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-581657" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:47.090746 1960604 request.go:629] Waited for 197.161515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-581657
	I0429 14:31:47.090816 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-581657
	I0429 14:31:47.090828 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:47.090836 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:47.090847 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:47.093732 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:31:47.291270 1960604 request.go:629] Waited for 196.79405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:31:47.291326 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:31:47.291338 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:47.291348 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:47.291356 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:47.294114 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:31:47.294771 1960604 pod_ready.go:97] node "ha-581657" hosting pod "kube-controller-manager-ha-581657" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"False"
	I0429 14:31:47.294796 1960604 pod_ready.go:81] duration metric: took 401.281601ms for pod "kube-controller-manager-ha-581657" in "kube-system" namespace to be "Ready" ...
	E0429 14:31:47.294807 1960604 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-581657" hosting pod "kube-controller-manager-ha-581657" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"False"
	I0429 14:31:47.294815 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-581657-m02" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:47.490783 1960604 request.go:629] Waited for 195.890909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-581657-m02
	I0429 14:31:47.490892 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-581657-m02
	I0429 14:31:47.490913 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:47.490921 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:47.490925 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:47.494235 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:31:47.690281 1960604 request.go:629] Waited for 195.136141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:31:47.690346 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:31:47.690358 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:47.690367 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:47.690375 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:47.693051 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:31:47.694160 1960604 pod_ready.go:92] pod "kube-controller-manager-ha-581657-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 14:31:47.694182 1960604 pod_ready.go:81] duration metric: took 399.355344ms for pod "kube-controller-manager-ha-581657-m02" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:47.694194 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-581657-m03" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:47.890790 1960604 request.go:629] Waited for 196.526704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-581657-m03
	I0429 14:31:47.890888 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-581657-m03
	I0429 14:31:47.890899 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:47.890907 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:47.890930 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:47.893911 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:31:48.091105 1960604 request.go:629] Waited for 196.382459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657-m03
	I0429 14:31:48.091186 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m03
	I0429 14:31:48.091192 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:48.091201 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:48.091209 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:48.094069 1960604 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0429 14:31:48.094203 1960604 pod_ready.go:97] node "ha-581657-m03" hosting pod "kube-controller-manager-ha-581657-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-581657-m03": nodes "ha-581657-m03" not found
	I0429 14:31:48.094222 1960604 pod_ready.go:81] duration metric: took 400.021153ms for pod "kube-controller-manager-ha-581657-m03" in "kube-system" namespace to be "Ready" ...
	E0429 14:31:48.094234 1960604 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-581657-m03" hosting pod "kube-controller-manager-ha-581657-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-581657-m03": nodes "ha-581657-m03" not found
	I0429 14:31:48.094247 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6hktv" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:48.290440 1960604 request.go:629] Waited for 196.119363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6hktv
	I0429 14:31:48.290534 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6hktv
	I0429 14:31:48.290546 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:48.290553 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:48.290557 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:48.293258 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:31:48.490262 1960604 request.go:629] Waited for 196.257964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657-m03
	I0429 14:31:48.490391 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m03
	I0429 14:31:48.490423 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:48.490451 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:48.490470 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:48.494339 1960604 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0429 14:31:48.494694 1960604 pod_ready.go:97] node "ha-581657-m03" hosting pod "kube-proxy-6hktv" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-581657-m03": nodes "ha-581657-m03" not found
	I0429 14:31:48.494752 1960604 pod_ready.go:81] duration metric: took 400.493036ms for pod "kube-proxy-6hktv" in "kube-system" namespace to be "Ready" ...
	E0429 14:31:48.494778 1960604 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-581657-m03" hosting pod "kube-proxy-6hktv" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-581657-m03": nodes "ha-581657-m03" not found
	I0429 14:31:48.494814 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d8t8s" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:48.691206 1960604 request.go:629] Waited for 196.309762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8t8s
	I0429 14:31:48.691305 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8t8s
	I0429 14:31:48.691350 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:48.691376 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:48.691396 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:48.694182 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:31:48.890440 1960604 request.go:629] Waited for 195.259635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:31:48.890549 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:31:48.890603 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:48.890616 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:48.890620 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:48.893838 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:31:48.894859 1960604 pod_ready.go:97] node "ha-581657" hosting pod "kube-proxy-d8t8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"False"
	I0429 14:31:48.894945 1960604 pod_ready.go:81] duration metric: took 400.099201ms for pod "kube-proxy-d8t8s" in "kube-system" namespace to be "Ready" ...
	E0429 14:31:48.894972 1960604 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-581657" hosting pod "kube-proxy-d8t8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"False"
	I0429 14:31:48.895010 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hshwx" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:49.090294 1960604 request.go:629] Waited for 195.188448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hshwx
	I0429 14:31:49.090407 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hshwx
	I0429 14:31:49.090440 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:49.090467 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:49.090486 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:49.093851 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:31:49.291346 1960604 request.go:629] Waited for 196.327494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:31:49.291418 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:31:49.291437 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:49.291446 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:49.291456 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:49.294182 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:31:49.294806 1960604 pod_ready.go:92] pod "kube-proxy-hshwx" in "kube-system" namespace has status "Ready":"True"
	I0429 14:31:49.294832 1960604 pod_ready.go:81] duration metric: took 399.800987ms for pod "kube-proxy-hshwx" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:49.294865 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zhbtq" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:49.491247 1960604 request.go:629] Waited for 196.30689ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhbtq
	I0429 14:31:49.491307 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhbtq
	I0429 14:31:49.491313 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:49.491321 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:49.491329 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:49.499371 1960604 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 14:31:49.690593 1960604 request.go:629] Waited for 190.26764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:31:49.690669 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:31:49.690676 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:49.690684 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:49.690693 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:49.699607 1960604 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 14:31:49.700341 1960604 pod_ready.go:92] pod "kube-proxy-zhbtq" in "kube-system" namespace has status "Ready":"True"
	I0429 14:31:49.700364 1960604 pod_ready.go:81] duration metric: took 405.482364ms for pod "kube-proxy-zhbtq" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:49.700376 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-581657" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:49.890846 1960604 request.go:629] Waited for 190.409703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-581657
	I0429 14:31:49.890911 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-581657
	I0429 14:31:49.890922 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:49.890931 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:49.890941 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:49.899821 1960604 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 14:31:50.091084 1960604 request.go:629] Waited for 190.484091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:31:50.091141 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:31:50.091158 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:50.091168 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:50.091172 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:50.097781 1960604 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 14:31:50.099503 1960604 pod_ready.go:97] node "ha-581657" hosting pod "kube-scheduler-ha-581657" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"False"
	I0429 14:31:50.099534 1960604 pod_ready.go:81] duration metric: took 399.150119ms for pod "kube-scheduler-ha-581657" in "kube-system" namespace to be "Ready" ...
	E0429 14:31:50.099546 1960604 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-581657" hosting pod "kube-scheduler-ha-581657" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"False"
	I0429 14:31:50.099553 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-581657-m02" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:50.291087 1960604 request.go:629] Waited for 191.440688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-581657-m02
	I0429 14:31:50.291161 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-581657-m02
	I0429 14:31:50.291171 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:50.291179 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:50.291183 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:50.294570 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:31:50.490474 1960604 request.go:629] Waited for 195.225584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:31:50.490547 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:31:50.490556 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:50.490585 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:50.490591 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:50.493259 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:31:50.494072 1960604 pod_ready.go:92] pod "kube-scheduler-ha-581657-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 14:31:50.494095 1960604 pod_ready.go:81] duration metric: took 394.534008ms for pod "kube-scheduler-ha-581657-m02" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:50.494106 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-581657-m03" in "kube-system" namespace to be "Ready" ...
	I0429 14:31:50.690409 1960604 request.go:629] Waited for 196.239461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-581657-m03
	I0429 14:31:50.690488 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-581657-m03
	I0429 14:31:50.690497 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:50.690506 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:50.690516 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:50.693197 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:31:50.891182 1960604 request.go:629] Waited for 197.325018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657-m03
	I0429 14:31:50.891241 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m03
	I0429 14:31:50.891265 1960604 round_trippers.go:469] Request Headers:
	I0429 14:31:50.891293 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:31:50.891303 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:31:50.894059 1960604 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0429 14:31:50.894178 1960604 pod_ready.go:97] node "ha-581657-m03" hosting pod "kube-scheduler-ha-581657-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-581657-m03": nodes "ha-581657-m03" not found
	I0429 14:31:50.894195 1960604 pod_ready.go:81] duration metric: took 400.081749ms for pod "kube-scheduler-ha-581657-m03" in "kube-system" namespace to be "Ready" ...
	E0429 14:31:50.894206 1960604 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-581657-m03" hosting pod "kube-scheduler-ha-581657-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-581657-m03": nodes "ha-581657-m03" not found
	I0429 14:31:50.894217 1960604 pod_ready.go:38] duration metric: took 5.56413121s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
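
The loop recorded above fetches each system pod, checks its Ready condition, then fetches the hosting node and checks that it is Ready too; a missing node (the deleted ha-581657-m03) or a not-Ready node causes the pod to be skipped. A minimal sketch of that pattern, not minikube's pod_ready.go itself; the pod name and kubeconfig path are placeholders:

// Illustrative check: pod Ready condition plus hosting-node Ready condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podAndNodeReady(ctx context.Context, c *kubernetes.Clientset, ns, pod string) (bool, error) {
	p, err := c.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	ready := false
	for _, cond := range p.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	if !ready {
		return false, nil
	}
	n, err := c.CoreV1().Nodes().Get(ctx, p.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err // node not found -> pod is skipped, as with ha-581657-m03 above
	}
	for _, cond := range n.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := podAndNodeReady(context.TODO(), client, "kube-system", "etcd-ha-581657") // example pod
	fmt.Println(ok, err)
}
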
	I0429 14:31:50.894236 1960604 api_server.go:52] waiting for apiserver process to appear ...
	I0429 14:31:50.894320 1960604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 14:31:50.906651 1960604 api_server.go:72] duration metric: took 18.930345373s to wait for apiserver process to appear ...
	I0429 14:31:50.906677 1960604 api_server.go:88] waiting for apiserver healthz status ...
	I0429 14:31:50.906696 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:31:50.914275 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:31:50.914304 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
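
Each poll cycle above issues a GET against /healthz and treats any non-200 response as "not healthy yet"; here the 500 is caused by the [-]poststarthook/start-service-ip-repair-controllers check, meaning the apiserver is serving but one post-start hook has not completed. A minimal sketch of the same probe, assuming network access to the control-plane endpoint shown in the log; TLS verification is skipped purely for illustration, whereas minikube uses the cluster's CA:

// Illustrative healthz probe against the endpoint from the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A 500 whose body lists "[-]poststarthook/... failed" lines, as in the log,
	// means the apiserver is up but still finishing its post-start hooks.
	fmt.Printf("status %d\n%s\n", resp.StatusCode, body)
}
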
	I0429 14:31:51.406826 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:31:51.415510 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:31:51.415555 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:31:51.907102 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:31:51.914911 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:31:51.914940 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:31:52.406818 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:31:52.415314 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:31:52.415339 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:31:52.906816 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:31:52.917522 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:31:52.917561 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:31:53.406814 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:31:53.432836 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:31:53.432875 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:31:53.907453 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:31:53.920356 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:31:53.920399 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:31:54.406761 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:31:54.416238 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:31:54.416273 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
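	The per-check listing in these 500 responses is the apiserver's verbose healthz output, so the same breakdown can be requested directly while the cluster is in this state (a sketch, assuming kubectl is pointed at this cluster's kubeconfig context, which is not shown in this excerpt):
	kubectl get --raw '/healthz?verbose=true'   # prints the same [+]/[-] per-check listing as the log above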
	[... the healthz poll above repeated at ~500ms intervals from 14:31:54.906835 through 14:32:02.919186; every attempt returned HTTP 500 with an identical breakdown, only poststarthook/start-service-ip-repair-controllers failing ("reason withheld") ...]
	I0429 14:32:03.407753 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:03.418889 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:03.418922 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:03.907278 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:03.917022 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:03.917049 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:04.407705 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:04.415342 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:04.415375 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:04.906825 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:04.914509 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:04.914542 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:05.406985 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:05.414532 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:05.414558 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:05.906819 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:05.914306 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:05.914342 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:06.406804 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:06.415259 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:06.415361 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:06.907661 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:06.915657 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:06.915688 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:07.407318 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:07.414842 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:07.414872 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:07.907343 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:07.920714 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:07.920743 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:08.406982 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:08.414641 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:08.414669 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:08.907209 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:08.915799 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:08.915828 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:09.407368 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:09.416986 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:09.417028 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:09.907312 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:09.914920 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:09.914948 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:10.407284 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:10.415307 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:10.415338 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:10.906835 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:10.916034 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:10.916068 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	healthz check failed
	I0429 14:32:11.407210 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:11.456322 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	healthz check failed
	W0429 14:32:11.456357 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	healthz check failed
	I0429 14:32:11.906851 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:12.021719 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	healthz check failed
	W0429 14:32:12.021756 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	healthz check failed
	I0429 14:32:12.407503 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:12.468835 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	healthz check failed
	W0429 14:32:12.468879 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	healthz check failed
	I0429 14:32:12.907022 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:12.915133 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	healthz check failed
	W0429 14:32:12.915155 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	healthz check failed
	I0429 14:32:13.407763 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:13.416685 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	healthz check failed
	W0429 14:32:13.416710 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	healthz check failed
	I0429 14:32:13.907292 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:13.915769 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	healthz check failed
	W0429 14:32:13.915795 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	healthz check failed
	I0429 14:32:14.407350 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:14.422058 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	healthz check failed
	W0429 14:32:14.422083 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	healthz check failed
	I0429 14:32:14.907732 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:14.917228 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	healthz check failed
	W0429 14:32:14.917262 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	healthz check failed
	I0429 14:32:15.407678 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:15.415175 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	healthz check failed
	W0429 14:32:15.415200 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	healthz check failed
	I0429 14:32:15.907777 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:15.915507 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	healthz check failed
	W0429 14:32:15.915534 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	healthz check failed
	I0429 14:32:16.406818 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:16.414600 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	healthz check failed
	W0429 14:32:16.414632 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	healthz check failed
	I0429 14:32:16.907072 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:16.914771 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	healthz check failed
	W0429 14:32:16.914804 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	healthz check failed
	I0429 14:32:17.407268 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:17.415273 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	healthz check failed
	W0429 14:32:17.415297 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	healthz check failed
	I0429 14:32:17.907476 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:17.920793 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	healthz check failed
	W0429 14:32:17.920823 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	healthz check failed
	I0429 14:32:18.407241 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:18.419027 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	healthz check failed
	W0429 14:32:18.419055 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	healthz check failed
	I0429 14:32:18.907460 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:18.914999 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	healthz check failed
	W0429 14:32:18.915027 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	healthz check failed
	I0429 14:32:19.407420 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:19.417012 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	healthz check failed
	W0429 14:32:19.417046 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:19.907686 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:19.915762 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:19.915793 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:20.407549 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:20.415292 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:20.415318 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:20.907389 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:20.914980 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:20.915009 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:21.407570 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:21.415337 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:21.415364 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:21.906827 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:21.915831 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:21.915871 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:22.407501 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:22.417218 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:22.417247 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:22.906822 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:22.914347 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:22.914420 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:23.406828 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:23.414568 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:23.414598 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:23.907001 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:23.914638 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:23.914663 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:24.407178 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:24.414729 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:24.414754 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:24.906910 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:24.914532 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:24.914559 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:25.406972 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:25.414706 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:25.414735 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:25.906888 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:25.914350 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:25.914383 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:26.406878 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:26.415174 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:26.415202 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:26.906808 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:26.914383 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:26.914411 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:27.407306 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:27.414898 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:27.414935 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:27.907518 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:27.915386 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:27.915411 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:28.406915 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:28.414578 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:28.414607 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:28.906828 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:28.914456 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:28.914485 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:29.406993 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:29.416384 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:29.416418 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:29.907704 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:29.915506 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:29.915545 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:30.407275 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:30.414804 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:30.414831 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:30.907540 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:30.915536 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:30.915565 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:31.407065 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:31.509536 1960604 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": read tcp 192.168.49.1:38078->192.168.49.2:8443: read: connection reset by peer
	I0429 14:32:31.906829 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:31.907298 1960604 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
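The repeated 500 responses above come from the apiserver's /healthz endpoint, which reports each startup hook as [+] (ok) or [-] (failed); on a 500 the per-check detail is included in the body, and with ?verbose it is returned even on success. Purely as an illustration of that protocol (not minikube's own api_server.go check), a minimal standalone probe in Go might look like the sketch below; the address is taken from the log, while skipping certificate verification is a simplification to keep the example self-contained.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Hypothetical standalone probe of the verbose healthz endpoint.
	// InsecureSkipVerify keeps the sketch self-contained; a real client
	// would trust the cluster CA instead.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz?verbose")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Lines starting with "[-]" mark the hooks that keep the endpoint at 500.
	for _, line := range strings.Split(string(body), "\n") {
		if strings.HasPrefix(line, "[-]") {
			fmt.Println("failing check:", line)
		}
	}
	fmt.Println("status:", resp.StatusCode)
}

In the trace above the only failing hook is poststarthook/start-service-ip-repair-controllers, which is what keeps the endpoint at 500 until the connection is reset.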
	I0429 14:32:32.407255 1960604 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 14:32:32.407337 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 14:32:32.475882 1960604 cri.go:89] found id: "d289e4bc5242b0e6e9ec798d1ee21ab39fa2dd99d88a0d8e682ffb72b5abd324"
	I0429 14:32:32.475904 1960604 cri.go:89] found id: "6f772ffb17737d5de169231d2d140f2e3f5dfc7bbb6c4c04fc10695eb2170463"
	I0429 14:32:32.475910 1960604 cri.go:89] found id: ""
	I0429 14:32:32.475917 1960604 logs.go:276] 2 containers: [d289e4bc5242b0e6e9ec798d1ee21ab39fa2dd99d88a0d8e682ffb72b5abd324 6f772ffb17737d5de169231d2d140f2e3f5dfc7bbb6c4c04fc10695eb2170463]
	I0429 14:32:32.475971 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:32.482639 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:32.486487 1960604 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 14:32:32.486551 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 14:32:32.576773 1960604 cri.go:89] found id: "5d59422af5b586e4f9b6daf7e9e52e83676a91539bb628653c2bebbd1a53da93"
	I0429 14:32:32.576796 1960604 cri.go:89] found id: "d35e835859cc855613d6afd4c72d21dcfdf659239cbe2a38ede0431ba2200ef1"
	I0429 14:32:32.576802 1960604 cri.go:89] found id: ""
	I0429 14:32:32.576809 1960604 logs.go:276] 2 containers: [5d59422af5b586e4f9b6daf7e9e52e83676a91539bb628653c2bebbd1a53da93 d35e835859cc855613d6afd4c72d21dcfdf659239cbe2a38ede0431ba2200ef1]
	I0429 14:32:32.576865 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:32.580532 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:32.584041 1960604 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 14:32:32.584115 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 14:32:32.633713 1960604 cri.go:89] found id: ""
	I0429 14:32:32.633744 1960604 logs.go:276] 0 containers: []
	W0429 14:32:32.633754 1960604 logs.go:278] No container was found matching "coredns"
	I0429 14:32:32.633761 1960604 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 14:32:32.633827 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 14:32:32.681845 1960604 cri.go:89] found id: "d0d8d0efa7ef0e6d3de917a91fa542ed2434f8974a5a391e678c96c0df343949"
	I0429 14:32:32.681873 1960604 cri.go:89] found id: "c9db617e65b90f2451176a8e0eaa4bbbcdf714b67cc252031b3b2766e4aed738"
	I0429 14:32:32.681881 1960604 cri.go:89] found id: ""
	I0429 14:32:32.681888 1960604 logs.go:276] 2 containers: [d0d8d0efa7ef0e6d3de917a91fa542ed2434f8974a5a391e678c96c0df343949 c9db617e65b90f2451176a8e0eaa4bbbcdf714b67cc252031b3b2766e4aed738]
	I0429 14:32:32.681951 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:32.685902 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:32.689515 1960604 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 14:32:32.689597 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 14:32:32.740688 1960604 cri.go:89] found id: "c32d11510ebd6c0b5c2b9c6ab8115ed2148c7fe281af388d43e51b3ff30dbdc4"
	I0429 14:32:32.740712 1960604 cri.go:89] found id: ""
	I0429 14:32:32.740721 1960604 logs.go:276] 1 containers: [c32d11510ebd6c0b5c2b9c6ab8115ed2148c7fe281af388d43e51b3ff30dbdc4]
	I0429 14:32:32.740774 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:32.749162 1960604 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 14:32:32.749235 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 14:32:32.811423 1960604 cri.go:89] found id: "62b1729ce58be8c6d93e056736e47f933b97b0e10292ce383dce3a05fb88556d"
	I0429 14:32:32.811441 1960604 cri.go:89] found id: "169574240cfc84c921ac61f6f654e6969bf336051b3e17400e4d8f6b1e017f00"
	I0429 14:32:32.811446 1960604 cri.go:89] found id: ""
	I0429 14:32:32.811452 1960604 logs.go:276] 2 containers: [62b1729ce58be8c6d93e056736e47f933b97b0e10292ce383dce3a05fb88556d 169574240cfc84c921ac61f6f654e6969bf336051b3e17400e4d8f6b1e017f00]
	I0429 14:32:32.811506 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:32.815577 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:32.819167 1960604 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 14:32:32.819225 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 14:32:32.866381 1960604 cri.go:89] found id: "e369fa475b66d2f4ad15aa1fd54695bc9ca686251f4e9a39ffc142af76c19d56"
	I0429 14:32:32.866401 1960604 cri.go:89] found id: ""
	I0429 14:32:32.866408 1960604 logs.go:276] 1 containers: [e369fa475b66d2f4ad15aa1fd54695bc9ca686251f4e9a39ffc142af76c19d56]
	I0429 14:32:32.866462 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:32.870332 1960604 logs.go:123] Gathering logs for kube-apiserver [6f772ffb17737d5de169231d2d140f2e3f5dfc7bbb6c4c04fc10695eb2170463] ...
	I0429 14:32:32.870351 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f772ffb17737d5de169231d2d140f2e3f5dfc7bbb6c4c04fc10695eb2170463"
	I0429 14:32:32.925928 1960604 logs.go:123] Gathering logs for kube-controller-manager [169574240cfc84c921ac61f6f654e6969bf336051b3e17400e4d8f6b1e017f00] ...
	I0429 14:32:32.925957 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169574240cfc84c921ac61f6f654e6969bf336051b3e17400e4d8f6b1e017f00"
	I0429 14:32:32.980531 1960604 logs.go:123] Gathering logs for kindnet [e369fa475b66d2f4ad15aa1fd54695bc9ca686251f4e9a39ffc142af76c19d56] ...
	I0429 14:32:32.980557 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e369fa475b66d2f4ad15aa1fd54695bc9ca686251f4e9a39ffc142af76c19d56"
	I0429 14:32:33.040519 1960604 logs.go:123] Gathering logs for container status ...
	I0429 14:32:33.040584 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 14:32:33.114941 1960604 logs.go:123] Gathering logs for kubelet ...
	I0429 14:32:33.114966 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 14:32:33.196826 1960604 logs.go:123] Gathering logs for describe nodes ...
	I0429 14:32:33.196861 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 14:32:33.587818 1960604 logs.go:123] Gathering logs for kube-scheduler [c9db617e65b90f2451176a8e0eaa4bbbcdf714b67cc252031b3b2766e4aed738] ...
	I0429 14:32:33.587864 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9db617e65b90f2451176a8e0eaa4bbbcdf714b67cc252031b3b2766e4aed738"
	I0429 14:32:33.635281 1960604 logs.go:123] Gathering logs for kube-proxy [c32d11510ebd6c0b5c2b9c6ab8115ed2148c7fe281af388d43e51b3ff30dbdc4] ...
	I0429 14:32:33.635311 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32d11510ebd6c0b5c2b9c6ab8115ed2148c7fe281af388d43e51b3ff30dbdc4"
	I0429 14:32:33.690230 1960604 logs.go:123] Gathering logs for kube-apiserver [d289e4bc5242b0e6e9ec798d1ee21ab39fa2dd99d88a0d8e682ffb72b5abd324] ...
	I0429 14:32:33.690258 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d289e4bc5242b0e6e9ec798d1ee21ab39fa2dd99d88a0d8e682ffb72b5abd324"
	I0429 14:32:33.763191 1960604 logs.go:123] Gathering logs for etcd [d35e835859cc855613d6afd4c72d21dcfdf659239cbe2a38ede0431ba2200ef1] ...
	I0429 14:32:33.763272 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d35e835859cc855613d6afd4c72d21dcfdf659239cbe2a38ede0431ba2200ef1"
	I0429 14:32:33.835878 1960604 logs.go:123] Gathering logs for etcd [5d59422af5b586e4f9b6daf7e9e52e83676a91539bb628653c2bebbd1a53da93] ...
	I0429 14:32:33.836071 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d59422af5b586e4f9b6daf7e9e52e83676a91539bb628653c2bebbd1a53da93"
	I0429 14:32:33.895964 1960604 logs.go:123] Gathering logs for kube-controller-manager [62b1729ce58be8c6d93e056736e47f933b97b0e10292ce383dce3a05fb88556d] ...
	I0429 14:32:33.897061 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b1729ce58be8c6d93e056736e47f933b97b0e10292ce383dce3a05fb88556d"
	I0429 14:32:34.000611 1960604 logs.go:123] Gathering logs for CRI-O ...
	I0429 14:32:34.000825 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 14:32:34.082187 1960604 logs.go:123] Gathering logs for dmesg ...
	I0429 14:32:34.082267 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 14:32:34.103729 1960604 logs.go:123] Gathering logs for kube-scheduler [d0d8d0efa7ef0e6d3de917a91fa542ed2434f8974a5a391e678c96c0df343949] ...
	I0429 14:32:34.103932 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0d8d0efa7ef0e6d3de917a91fa542ed2434f8974a5a391e678c96c0df343949"
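While the apiserver stays unhealthy, minikube falls back to gathering diagnostics: for each control-plane component it lists containers with `sudo crictl ps -a --quiet --name=<component>` and then tails the last 400 lines of each container's log with `crictl logs --tail 400 <id>`. As a hedged sketch of that same sequence outside minikube (assuming crictl and sudo are available on the node, with the flags copied from the commands in the log), one could write:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors `sudo crictl ps -a --quiet --name=<component>`:
// it returns the IDs of all containers (running or exited) for one component.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors `sudo crictl logs --tail 400 <id>` for one container.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println(component, "listing failed:", err)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("--- %s [%s] (%d bytes of logs)\n", component, id, len(logs))
		}
	}
}

The empty result for coredns in the log ("0 containers") simply means no coredns container had been created yet on this node at that point in the restart.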
	I0429 14:32:36.673822 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:36.681729 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 14:32:36.681760 1960604 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 14:32:36.681794 1960604 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 14:32:36.681866 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 14:32:36.725038 1960604 cri.go:89] found id: "d289e4bc5242b0e6e9ec798d1ee21ab39fa2dd99d88a0d8e682ffb72b5abd324"
	I0429 14:32:36.725061 1960604 cri.go:89] found id: "6f772ffb17737d5de169231d2d140f2e3f5dfc7bbb6c4c04fc10695eb2170463"
	I0429 14:32:36.725067 1960604 cri.go:89] found id: ""
	I0429 14:32:36.725074 1960604 logs.go:276] 2 containers: [d289e4bc5242b0e6e9ec798d1ee21ab39fa2dd99d88a0d8e682ffb72b5abd324 6f772ffb17737d5de169231d2d140f2e3f5dfc7bbb6c4c04fc10695eb2170463]
	I0429 14:32:36.725129 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:36.728904 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:36.732214 1960604 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 14:32:36.732281 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 14:32:36.784206 1960604 cri.go:89] found id: "5d59422af5b586e4f9b6daf7e9e52e83676a91539bb628653c2bebbd1a53da93"
	I0429 14:32:36.784225 1960604 cri.go:89] found id: "d35e835859cc855613d6afd4c72d21dcfdf659239cbe2a38ede0431ba2200ef1"
	I0429 14:32:36.784230 1960604 cri.go:89] found id: ""
	I0429 14:32:36.784237 1960604 logs.go:276] 2 containers: [5d59422af5b586e4f9b6daf7e9e52e83676a91539bb628653c2bebbd1a53da93 d35e835859cc855613d6afd4c72d21dcfdf659239cbe2a38ede0431ba2200ef1]
	I0429 14:32:36.784300 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:36.787918 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:36.791984 1960604 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 14:32:36.792050 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 14:32:36.830139 1960604 cri.go:89] found id: ""
	I0429 14:32:36.830165 1960604 logs.go:276] 0 containers: []
	W0429 14:32:36.830174 1960604 logs.go:278] No container was found matching "coredns"
	I0429 14:32:36.830182 1960604 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 14:32:36.830239 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 14:32:36.870234 1960604 cri.go:89] found id: "d0d8d0efa7ef0e6d3de917a91fa542ed2434f8974a5a391e678c96c0df343949"
	I0429 14:32:36.870255 1960604 cri.go:89] found id: "c9db617e65b90f2451176a8e0eaa4bbbcdf714b67cc252031b3b2766e4aed738"
	I0429 14:32:36.870260 1960604 cri.go:89] found id: ""
	I0429 14:32:36.870267 1960604 logs.go:276] 2 containers: [d0d8d0efa7ef0e6d3de917a91fa542ed2434f8974a5a391e678c96c0df343949 c9db617e65b90f2451176a8e0eaa4bbbcdf714b67cc252031b3b2766e4aed738]
	I0429 14:32:36.870323 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:36.874030 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:36.879112 1960604 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 14:32:36.879178 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 14:32:36.944871 1960604 cri.go:89] found id: "c32d11510ebd6c0b5c2b9c6ab8115ed2148c7fe281af388d43e51b3ff30dbdc4"
	I0429 14:32:36.944890 1960604 cri.go:89] found id: ""
	I0429 14:32:36.944898 1960604 logs.go:276] 1 containers: [c32d11510ebd6c0b5c2b9c6ab8115ed2148c7fe281af388d43e51b3ff30dbdc4]
	I0429 14:32:36.944952 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:36.949398 1960604 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 14:32:36.949470 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 14:32:37.008384 1960604 cri.go:89] found id: "62b1729ce58be8c6d93e056736e47f933b97b0e10292ce383dce3a05fb88556d"
	I0429 14:32:37.008465 1960604 cri.go:89] found id: "169574240cfc84c921ac61f6f654e6969bf336051b3e17400e4d8f6b1e017f00"
	I0429 14:32:37.008483 1960604 cri.go:89] found id: ""
	I0429 14:32:37.008507 1960604 logs.go:276] 2 containers: [62b1729ce58be8c6d93e056736e47f933b97b0e10292ce383dce3a05fb88556d 169574240cfc84c921ac61f6f654e6969bf336051b3e17400e4d8f6b1e017f00]
	I0429 14:32:37.008601 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:37.015108 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:37.020658 1960604 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 14:32:37.020799 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 14:32:37.095020 1960604 cri.go:89] found id: "e369fa475b66d2f4ad15aa1fd54695bc9ca686251f4e9a39ffc142af76c19d56"
	I0429 14:32:37.095045 1960604 cri.go:89] found id: ""
	I0429 14:32:37.095054 1960604 logs.go:276] 1 containers: [e369fa475b66d2f4ad15aa1fd54695bc9ca686251f4e9a39ffc142af76c19d56]
	I0429 14:32:37.095135 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:37.102465 1960604 logs.go:123] Gathering logs for kube-apiserver [6f772ffb17737d5de169231d2d140f2e3f5dfc7bbb6c4c04fc10695eb2170463] ...
	I0429 14:32:37.102499 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f772ffb17737d5de169231d2d140f2e3f5dfc7bbb6c4c04fc10695eb2170463"
	I0429 14:32:37.195762 1960604 logs.go:123] Gathering logs for etcd [5d59422af5b586e4f9b6daf7e9e52e83676a91539bb628653c2bebbd1a53da93] ...
	I0429 14:32:37.195791 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d59422af5b586e4f9b6daf7e9e52e83676a91539bb628653c2bebbd1a53da93"
	I0429 14:32:37.252962 1960604 logs.go:123] Gathering logs for kube-controller-manager [62b1729ce58be8c6d93e056736e47f933b97b0e10292ce383dce3a05fb88556d] ...
	I0429 14:32:37.253002 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b1729ce58be8c6d93e056736e47f933b97b0e10292ce383dce3a05fb88556d"
	I0429 14:32:37.343617 1960604 logs.go:123] Gathering logs for kube-controller-manager [169574240cfc84c921ac61f6f654e6969bf336051b3e17400e4d8f6b1e017f00] ...
	I0429 14:32:37.343653 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169574240cfc84c921ac61f6f654e6969bf336051b3e17400e4d8f6b1e017f00"
	I0429 14:32:37.407186 1960604 logs.go:123] Gathering logs for kube-apiserver [d289e4bc5242b0e6e9ec798d1ee21ab39fa2dd99d88a0d8e682ffb72b5abd324] ...
	I0429 14:32:37.407220 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d289e4bc5242b0e6e9ec798d1ee21ab39fa2dd99d88a0d8e682ffb72b5abd324"
	I0429 14:32:37.495341 1960604 logs.go:123] Gathering logs for kube-proxy [c32d11510ebd6c0b5c2b9c6ab8115ed2148c7fe281af388d43e51b3ff30dbdc4] ...
	I0429 14:32:37.495376 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32d11510ebd6c0b5c2b9c6ab8115ed2148c7fe281af388d43e51b3ff30dbdc4"
	I0429 14:32:37.546794 1960604 logs.go:123] Gathering logs for kindnet [e369fa475b66d2f4ad15aa1fd54695bc9ca686251f4e9a39ffc142af76c19d56] ...
	I0429 14:32:37.546830 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e369fa475b66d2f4ad15aa1fd54695bc9ca686251f4e9a39ffc142af76c19d56"
	I0429 14:32:37.592737 1960604 logs.go:123] Gathering logs for CRI-O ...
	I0429 14:32:37.592766 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 14:32:37.662675 1960604 logs.go:123] Gathering logs for container status ...
	I0429 14:32:37.662711 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 14:32:37.704797 1960604 logs.go:123] Gathering logs for kubelet ...
	I0429 14:32:37.704825 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 14:32:37.781635 1960604 logs.go:123] Gathering logs for describe nodes ...
	I0429 14:32:37.781670 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 14:32:38.028830 1960604 logs.go:123] Gathering logs for etcd [d35e835859cc855613d6afd4c72d21dcfdf659239cbe2a38ede0431ba2200ef1] ...
	I0429 14:32:38.028863 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d35e835859cc855613d6afd4c72d21dcfdf659239cbe2a38ede0431ba2200ef1"
	I0429 14:32:38.093617 1960604 logs.go:123] Gathering logs for kube-scheduler [d0d8d0efa7ef0e6d3de917a91fa542ed2434f8974a5a391e678c96c0df343949] ...
	I0429 14:32:38.093650 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0d8d0efa7ef0e6d3de917a91fa542ed2434f8974a5a391e678c96c0df343949"
	I0429 14:32:38.138594 1960604 logs.go:123] Gathering logs for kube-scheduler [c9db617e65b90f2451176a8e0eaa4bbbcdf714b67cc252031b3b2766e4aed738] ...
	I0429 14:32:38.138626 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9db617e65b90f2451176a8e0eaa4bbbcdf714b67cc252031b3b2766e4aed738"
	I0429 14:32:38.185416 1960604 logs.go:123] Gathering logs for dmesg ...
	I0429 14:32:38.185442 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 14:32:40.706486 1960604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0429 14:32:40.715977 1960604 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0429 14:32:40.716047 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0429 14:32:40.716053 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:40.716062 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:40.716066 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:40.730191 1960604 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 14:32:40.730359 1960604 api_server.go:141] control plane version: v1.30.0
	I0429 14:32:40.730379 1960604 api_server.go:131] duration metric: took 49.823695701s to wait for apiserver health ...
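Once /healthz finally returns 200, the control-plane version is confirmed with a GET to /version, which yields a small JSON document whose gitVersion field is v1.30.0 here. A minimal sketch of that request is shown below; it is a hypothetical standalone check rather than minikube's round_trippers code, and certificate verification is again relaxed only to keep the example short.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// versionInfo holds the fields of interest from the apiserver's /version reply.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	resp, err := client.Get("https://192.168.49.2:8443/version")
	if err != nil {
		fmt.Println("version check failed:", err)
		return
	}
	defer resp.Body.Close()
	var v versionInfo
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.30.0
}

After this check succeeds, the test moves on to waiting for the kube-system pods, as the following lines show.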
	I0429 14:32:40.730388 1960604 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 14:32:40.730415 1960604 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 14:32:40.730491 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 14:32:40.782362 1960604 cri.go:89] found id: "d289e4bc5242b0e6e9ec798d1ee21ab39fa2dd99d88a0d8e682ffb72b5abd324"
	I0429 14:32:40.782437 1960604 cri.go:89] found id: "6f772ffb17737d5de169231d2d140f2e3f5dfc7bbb6c4c04fc10695eb2170463"
	I0429 14:32:40.782448 1960604 cri.go:89] found id: ""
	I0429 14:32:40.782457 1960604 logs.go:276] 2 containers: [d289e4bc5242b0e6e9ec798d1ee21ab39fa2dd99d88a0d8e682ffb72b5abd324 6f772ffb17737d5de169231d2d140f2e3f5dfc7bbb6c4c04fc10695eb2170463]
	I0429 14:32:40.782522 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:40.786294 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:40.789818 1960604 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 14:32:40.789901 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 14:32:40.830073 1960604 cri.go:89] found id: "5d59422af5b586e4f9b6daf7e9e52e83676a91539bb628653c2bebbd1a53da93"
	I0429 14:32:40.830094 1960604 cri.go:89] found id: "d35e835859cc855613d6afd4c72d21dcfdf659239cbe2a38ede0431ba2200ef1"
	I0429 14:32:40.830099 1960604 cri.go:89] found id: ""
	I0429 14:32:40.830106 1960604 logs.go:276] 2 containers: [5d59422af5b586e4f9b6daf7e9e52e83676a91539bb628653c2bebbd1a53da93 d35e835859cc855613d6afd4c72d21dcfdf659239cbe2a38ede0431ba2200ef1]
	I0429 14:32:40.830173 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:40.833958 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:40.837461 1960604 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 14:32:40.837561 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 14:32:40.874032 1960604 cri.go:89] found id: ""
	I0429 14:32:40.874068 1960604 logs.go:276] 0 containers: []
	W0429 14:32:40.874078 1960604 logs.go:278] No container was found matching "coredns"
	I0429 14:32:40.874085 1960604 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 14:32:40.874145 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 14:32:40.911867 1960604 cri.go:89] found id: "d0d8d0efa7ef0e6d3de917a91fa542ed2434f8974a5a391e678c96c0df343949"
	I0429 14:32:40.911889 1960604 cri.go:89] found id: "c9db617e65b90f2451176a8e0eaa4bbbcdf714b67cc252031b3b2766e4aed738"
	I0429 14:32:40.911894 1960604 cri.go:89] found id: ""
	I0429 14:32:40.911902 1960604 logs.go:276] 2 containers: [d0d8d0efa7ef0e6d3de917a91fa542ed2434f8974a5a391e678c96c0df343949 c9db617e65b90f2451176a8e0eaa4bbbcdf714b67cc252031b3b2766e4aed738]
	I0429 14:32:40.911960 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:40.916187 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:40.919697 1960604 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 14:32:40.919824 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 14:32:40.960890 1960604 cri.go:89] found id: "c32d11510ebd6c0b5c2b9c6ab8115ed2148c7fe281af388d43e51b3ff30dbdc4"
	I0429 14:32:40.960912 1960604 cri.go:89] found id: ""
	I0429 14:32:40.960919 1960604 logs.go:276] 1 containers: [c32d11510ebd6c0b5c2b9c6ab8115ed2148c7fe281af388d43e51b3ff30dbdc4]
	I0429 14:32:40.960973 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:40.964548 1960604 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 14:32:40.964643 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 14:32:41.006746 1960604 cri.go:89] found id: "62b1729ce58be8c6d93e056736e47f933b97b0e10292ce383dce3a05fb88556d"
	I0429 14:32:41.006814 1960604 cri.go:89] found id: "169574240cfc84c921ac61f6f654e6969bf336051b3e17400e4d8f6b1e017f00"
	I0429 14:32:41.006847 1960604 cri.go:89] found id: ""
	I0429 14:32:41.006860 1960604 logs.go:276] 2 containers: [62b1729ce58be8c6d93e056736e47f933b97b0e10292ce383dce3a05fb88556d 169574240cfc84c921ac61f6f654e6969bf336051b3e17400e4d8f6b1e017f00]
	I0429 14:32:41.006927 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:41.010583 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:41.014138 1960604 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 14:32:41.014214 1960604 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 14:32:41.051435 1960604 cri.go:89] found id: "e369fa475b66d2f4ad15aa1fd54695bc9ca686251f4e9a39ffc142af76c19d56"
	I0429 14:32:41.051459 1960604 cri.go:89] found id: ""
	I0429 14:32:41.051467 1960604 logs.go:276] 1 containers: [e369fa475b66d2f4ad15aa1fd54695bc9ca686251f4e9a39ffc142af76c19d56]
	I0429 14:32:41.051529 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:41.055375 1960604 logs.go:123] Gathering logs for kube-apiserver [6f772ffb17737d5de169231d2d140f2e3f5dfc7bbb6c4c04fc10695eb2170463] ...
	I0429 14:32:41.055401 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f772ffb17737d5de169231d2d140f2e3f5dfc7bbb6c4c04fc10695eb2170463"
	I0429 14:32:41.092940 1960604 logs.go:123] Gathering logs for kube-scheduler [d0d8d0efa7ef0e6d3de917a91fa542ed2434f8974a5a391e678c96c0df343949] ...
	I0429 14:32:41.092968 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d0d8d0efa7ef0e6d3de917a91fa542ed2434f8974a5a391e678c96c0df343949"
	I0429 14:32:41.132534 1960604 logs.go:123] Gathering logs for container status ...
	I0429 14:32:41.132567 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 14:32:41.195060 1960604 logs.go:123] Gathering logs for kubelet ...
	I0429 14:32:41.195085 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 14:32:41.281079 1960604 logs.go:123] Gathering logs for dmesg ...
	I0429 14:32:41.281115 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 14:32:41.301233 1960604 logs.go:123] Gathering logs for kube-scheduler [c9db617e65b90f2451176a8e0eaa4bbbcdf714b67cc252031b3b2766e4aed738] ...
	I0429 14:32:41.301265 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9db617e65b90f2451176a8e0eaa4bbbcdf714b67cc252031b3b2766e4aed738"
	I0429 14:32:41.344151 1960604 logs.go:123] Gathering logs for kindnet [e369fa475b66d2f4ad15aa1fd54695bc9ca686251f4e9a39ffc142af76c19d56] ...
	I0429 14:32:41.344190 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e369fa475b66d2f4ad15aa1fd54695bc9ca686251f4e9a39ffc142af76c19d56"
	I0429 14:32:41.384395 1960604 logs.go:123] Gathering logs for CRI-O ...
	I0429 14:32:41.384426 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 14:32:41.455563 1960604 logs.go:123] Gathering logs for etcd [5d59422af5b586e4f9b6daf7e9e52e83676a91539bb628653c2bebbd1a53da93] ...
	I0429 14:32:41.455600 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d59422af5b586e4f9b6daf7e9e52e83676a91539bb628653c2bebbd1a53da93"
	I0429 14:32:41.509348 1960604 logs.go:123] Gathering logs for etcd [d35e835859cc855613d6afd4c72d21dcfdf659239cbe2a38ede0431ba2200ef1] ...
	I0429 14:32:41.509384 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d35e835859cc855613d6afd4c72d21dcfdf659239cbe2a38ede0431ba2200ef1"
	I0429 14:32:41.563234 1960604 logs.go:123] Gathering logs for kube-proxy [c32d11510ebd6c0b5c2b9c6ab8115ed2148c7fe281af388d43e51b3ff30dbdc4] ...
	I0429 14:32:41.563268 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c32d11510ebd6c0b5c2b9c6ab8115ed2148c7fe281af388d43e51b3ff30dbdc4"
	I0429 14:32:41.607043 1960604 logs.go:123] Gathering logs for kube-controller-manager [62b1729ce58be8c6d93e056736e47f933b97b0e10292ce383dce3a05fb88556d] ...
	I0429 14:32:41.607074 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62b1729ce58be8c6d93e056736e47f933b97b0e10292ce383dce3a05fb88556d"
	I0429 14:32:41.669636 1960604 logs.go:123] Gathering logs for describe nodes ...
	I0429 14:32:41.669669 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 14:32:41.961849 1960604 logs.go:123] Gathering logs for kube-apiserver [d289e4bc5242b0e6e9ec798d1ee21ab39fa2dd99d88a0d8e682ffb72b5abd324] ...
	I0429 14:32:41.961881 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d289e4bc5242b0e6e9ec798d1ee21ab39fa2dd99d88a0d8e682ffb72b5abd324"
	I0429 14:32:42.031634 1960604 logs.go:123] Gathering logs for kube-controller-manager [169574240cfc84c921ac61f6f654e6969bf336051b3e17400e4d8f6b1e017f00] ...
	I0429 14:32:42.031666 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169574240cfc84c921ac61f6f654e6969bf336051b3e17400e4d8f6b1e017f00"
	I0429 14:32:44.599486 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0429 14:32:44.599514 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:44.599524 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:44.599528 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:44.609953 1960604 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 14:32:44.629621 1960604 system_pods.go:59] 26 kube-system pods found
	I0429 14:32:44.629672 1960604 system_pods.go:61] "coredns-7db6d8ff4d-9nqsr" [03cf70a1-960e-4ac9-bb97-ed66df6d64aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 14:32:44.629685 1960604 system_pods.go:61] "coredns-7db6d8ff4d-qvn8n" [beb4584d-d360-46b9-b0c6-8d884ec2616a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 14:32:44.629692 1960604 system_pods.go:61] "etcd-ha-581657" [1b9fcd37-156b-44e3-b823-f150a6f39816] Running
	I0429 14:32:44.629698 1960604 system_pods.go:61] "etcd-ha-581657-m02" [c10c9bb8-3d42-453c-9021-0ed5c78fd966] Running
	I0429 14:32:44.629709 1960604 system_pods.go:61] "etcd-ha-581657-m03" [01549949-ea16-4db3-8a81-e4ac360d74d0] Running
	I0429 14:32:44.629714 1960604 system_pods.go:61] "kindnet-7prmx" [0c276e59-6c3f-4cfc-a08f-138893376155] Running
	I0429 14:32:44.629730 1960604 system_pods.go:61] "kindnet-9sxl7" [8109d1c4-4861-472e-8593-80f0696ca815] Running
	I0429 14:32:44.629747 1960604 system_pods.go:61] "kindnet-xp94m" [843d89f0-92d1-4bfe-88e6-9e4dae85bd14] Running
	I0429 14:32:44.629751 1960604 system_pods.go:61] "kindnet-z64kr" [86daf1b5-b86c-4ac0-ab1b-d2be29b148b5] Running
	I0429 14:32:44.629757 1960604 system_pods.go:61] "kube-apiserver-ha-581657" [1f901d0b-e367-479e-8705-c320c75a407c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 14:32:44.629766 1960604 system_pods.go:61] "kube-apiserver-ha-581657-m02" [10187a13-4157-4d66-91e6-fd51a6ea37fb] Running
	I0429 14:32:44.629771 1960604 system_pods.go:61] "kube-apiserver-ha-581657-m03" [c80318b6-8da5-4905-b152-9d7c2cb10b79] Running
	I0429 14:32:44.629782 1960604 system_pods.go:61] "kube-controller-manager-ha-581657" [ac59bfd0-3e35-4b91-9b4c-dedb855a72fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 14:32:44.629793 1960604 system_pods.go:61] "kube-controller-manager-ha-581657-m02" [64d6c96a-de2f-482a-9456-bd31e3d59ac3] Running
	I0429 14:32:44.629798 1960604 system_pods.go:61] "kube-controller-manager-ha-581657-m03" [69dd61ad-7479-4c16-8d3e-baed77f871f9] Running
	I0429 14:32:44.629802 1960604 system_pods.go:61] "kube-proxy-6hktv" [55a04e01-5b34-469a-9a29-a4fd653a999b] Running
	I0429 14:32:44.629806 1960604 system_pods.go:61] "kube-proxy-d8t8s" [c7d0d752-ed63-4933-afe0-f1b8e6d7f61c] Running
	I0429 14:32:44.629814 1960604 system_pods.go:61] "kube-proxy-hshwx" [68841dcb-9f30-44a7-b6dd-68b799bb3431] Running
	I0429 14:32:44.629818 1960604 system_pods.go:61] "kube-proxy-zhbtq" [7b5a776e-6855-4ed4-89a0-87c18eb5b171] Running
	I0429 14:32:44.629824 1960604 system_pods.go:61] "kube-scheduler-ha-581657" [071b1db3-9936-4ead-952b-e73e08d0b46a] Running
	I0429 14:32:44.629830 1960604 system_pods.go:61] "kube-scheduler-ha-581657-m02" [359208d2-4621-4635-a887-d83883f52ca1] Running
	I0429 14:32:44.629837 1960604 system_pods.go:61] "kube-scheduler-ha-581657-m03" [5fc3c01f-4517-4c0e-a481-12525d4fb55f] Running
	I0429 14:32:44.629842 1960604 system_pods.go:61] "kube-vip-ha-581657" [2ab777b7-d05e-414d-8937-4ca4b59b4cfe] Running
	I0429 14:32:44.629846 1960604 system_pods.go:61] "kube-vip-ha-581657-m02" [27eb0f8c-b6ec-4c6a-a3dc-5d049e040a97] Running
	I0429 14:32:44.629856 1960604 system_pods.go:61] "kube-vip-ha-581657-m03" [e10b8e01-c7c9-4faa-aba0-0896aceb9cf6] Running
	I0429 14:32:44.629860 1960604 system_pods.go:61] "storage-provisioner" [d5de88c7-890e-4504-af85-139980146047] Running
	I0429 14:32:44.629867 1960604 system_pods.go:74] duration metric: took 3.899472556s to wait for pod list to return data ...
	I0429 14:32:44.629882 1960604 default_sa.go:34] waiting for default service account to be created ...
	I0429 14:32:44.629970 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0429 14:32:44.629982 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:44.629997 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:44.630004 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:44.633022 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:32:44.633270 1960604 default_sa.go:45] found service account: "default"
	I0429 14:32:44.633291 1960604 default_sa.go:55] duration metric: took 3.401822ms for default service account to be created ...
	I0429 14:32:44.633309 1960604 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 14:32:44.633367 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0429 14:32:44.633375 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:44.633383 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:44.633386 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:44.640437 1960604 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 14:32:44.649700 1960604 system_pods.go:86] 26 kube-system pods found
	I0429 14:32:44.649743 1960604 system_pods.go:89] "coredns-7db6d8ff4d-9nqsr" [03cf70a1-960e-4ac9-bb97-ed66df6d64aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 14:32:44.649754 1960604 system_pods.go:89] "coredns-7db6d8ff4d-qvn8n" [beb4584d-d360-46b9-b0c6-8d884ec2616a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 14:32:44.649780 1960604 system_pods.go:89] "etcd-ha-581657" [1b9fcd37-156b-44e3-b823-f150a6f39816] Running
	I0429 14:32:44.649794 1960604 system_pods.go:89] "etcd-ha-581657-m02" [c10c9bb8-3d42-453c-9021-0ed5c78fd966] Running
	I0429 14:32:44.649799 1960604 system_pods.go:89] "etcd-ha-581657-m03" [01549949-ea16-4db3-8a81-e4ac360d74d0] Running
	I0429 14:32:44.649804 1960604 system_pods.go:89] "kindnet-7prmx" [0c276e59-6c3f-4cfc-a08f-138893376155] Running
	I0429 14:32:44.649810 1960604 system_pods.go:89] "kindnet-9sxl7" [8109d1c4-4861-472e-8593-80f0696ca815] Running
	I0429 14:32:44.649815 1960604 system_pods.go:89] "kindnet-xp94m" [843d89f0-92d1-4bfe-88e6-9e4dae85bd14] Running
	I0429 14:32:44.649822 1960604 system_pods.go:89] "kindnet-z64kr" [86daf1b5-b86c-4ac0-ab1b-d2be29b148b5] Running
	I0429 14:32:44.649831 1960604 system_pods.go:89] "kube-apiserver-ha-581657" [1f901d0b-e367-479e-8705-c320c75a407c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 14:32:44.649838 1960604 system_pods.go:89] "kube-apiserver-ha-581657-m02" [10187a13-4157-4d66-91e6-fd51a6ea37fb] Running
	I0429 14:32:44.649857 1960604 system_pods.go:89] "kube-apiserver-ha-581657-m03" [c80318b6-8da5-4905-b152-9d7c2cb10b79] Running
	I0429 14:32:44.649873 1960604 system_pods.go:89] "kube-controller-manager-ha-581657" [ac59bfd0-3e35-4b91-9b4c-dedb855a72fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 14:32:44.649889 1960604 system_pods.go:89] "kube-controller-manager-ha-581657-m02" [64d6c96a-de2f-482a-9456-bd31e3d59ac3] Running
	I0429 14:32:44.649901 1960604 system_pods.go:89] "kube-controller-manager-ha-581657-m03" [69dd61ad-7479-4c16-8d3e-baed77f871f9] Running
	I0429 14:32:44.649906 1960604 system_pods.go:89] "kube-proxy-6hktv" [55a04e01-5b34-469a-9a29-a4fd653a999b] Running
	I0429 14:32:44.649911 1960604 system_pods.go:89] "kube-proxy-d8t8s" [c7d0d752-ed63-4933-afe0-f1b8e6d7f61c] Running
	I0429 14:32:44.649917 1960604 system_pods.go:89] "kube-proxy-hshwx" [68841dcb-9f30-44a7-b6dd-68b799bb3431] Running
	I0429 14:32:44.649922 1960604 system_pods.go:89] "kube-proxy-zhbtq" [7b5a776e-6855-4ed4-89a0-87c18eb5b171] Running
	I0429 14:32:44.649926 1960604 system_pods.go:89] "kube-scheduler-ha-581657" [071b1db3-9936-4ead-952b-e73e08d0b46a] Running
	I0429 14:32:44.649933 1960604 system_pods.go:89] "kube-scheduler-ha-581657-m02" [359208d2-4621-4635-a887-d83883f52ca1] Running
	I0429 14:32:44.649937 1960604 system_pods.go:89] "kube-scheduler-ha-581657-m03" [5fc3c01f-4517-4c0e-a481-12525d4fb55f] Running
	I0429 14:32:44.649944 1960604 system_pods.go:89] "kube-vip-ha-581657" [2ab777b7-d05e-414d-8937-4ca4b59b4cfe] Running
	I0429 14:32:44.649947 1960604 system_pods.go:89] "kube-vip-ha-581657-m02" [27eb0f8c-b6ec-4c6a-a3dc-5d049e040a97] Running
	I0429 14:32:44.649951 1960604 system_pods.go:89] "kube-vip-ha-581657-m03" [e10b8e01-c7c9-4faa-aba0-0896aceb9cf6] Running
	I0429 14:32:44.649962 1960604 system_pods.go:89] "storage-provisioner" [d5de88c7-890e-4504-af85-139980146047] Running
	I0429 14:32:44.649969 1960604 system_pods.go:126] duration metric: took 16.654585ms to wait for k8s-apps to be running ...
	I0429 14:32:44.649977 1960604 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 14:32:44.650041 1960604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 14:32:44.663958 1960604 system_svc.go:56] duration metric: took 13.97135ms WaitForService to wait for kubelet
	I0429 14:32:44.663991 1960604 kubeadm.go:576] duration metric: took 1m12.687690386s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 14:32:44.664011 1960604 node_conditions.go:102] verifying NodePressure condition ...
	I0429 14:32:44.664091 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0429 14:32:44.664101 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:44.664110 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:44.664115 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:44.667495 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:32:44.670034 1960604 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0429 14:32:44.670101 1960604 node_conditions.go:123] node cpu capacity is 2
	I0429 14:32:44.670134 1960604 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0429 14:32:44.670154 1960604 node_conditions.go:123] node cpu capacity is 2
	I0429 14:32:44.670182 1960604 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0429 14:32:44.670206 1960604 node_conditions.go:123] node cpu capacity is 2
	I0429 14:32:44.670228 1960604 node_conditions.go:105] duration metric: took 6.210039ms to run NodePressure ...
	I0429 14:32:44.670259 1960604 start.go:240] waiting for startup goroutines ...
	I0429 14:32:44.670308 1960604 start.go:254] writing updated cluster config ...
	I0429 14:32:44.673134 1960604 out.go:177] 
	I0429 14:32:44.675491 1960604 config.go:182] Loaded profile config "ha-581657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:32:44.675664 1960604 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/config.json ...
	I0429 14:32:44.678466 1960604 out.go:177] * Starting "ha-581657-m04" worker node in "ha-581657" cluster
	I0429 14:32:44.681476 1960604 cache.go:121] Beginning downloading kic base image for docker with crio
	I0429 14:32:44.683766 1960604 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 14:32:44.686000 1960604 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:32:44.686033 1960604 cache.go:56] Caching tarball of preloaded images
	I0429 14:32:44.686077 1960604 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 14:32:44.686148 1960604 preload.go:173] Found /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0429 14:32:44.686159 1960604 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 14:32:44.686296 1960604 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/config.json ...
	I0429 14:32:44.700354 1960604 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 14:32:44.700530 1960604 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 14:32:44.700558 1960604 cache.go:194] Successfully downloaded all kic artifacts
	I0429 14:32:44.700588 1960604 start.go:360] acquireMachinesLock for ha-581657-m04: {Name:mk7e72610ebfed129e21b2f58abea67586de43f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 14:32:44.700655 1960604 start.go:364] duration metric: took 42.552µs to acquireMachinesLock for "ha-581657-m04"
	I0429 14:32:44.700703 1960604 start.go:96] Skipping create...Using existing machine configuration
	I0429 14:32:44.700710 1960604 fix.go:54] fixHost starting: m04
	I0429 14:32:44.700968 1960604 cli_runner.go:164] Run: docker container inspect ha-581657-m04 --format={{.State.Status}}
	I0429 14:32:44.717562 1960604 fix.go:112] recreateIfNeeded on ha-581657-m04: state=Stopped err=<nil>
	W0429 14:32:44.717597 1960604 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 14:32:44.720389 1960604 out.go:177] * Restarting existing docker container for "ha-581657-m04" ...
	I0429 14:32:44.722761 1960604 cli_runner.go:164] Run: docker start ha-581657-m04
	I0429 14:32:45.078589 1960604 cli_runner.go:164] Run: docker container inspect ha-581657-m04 --format={{.State.Status}}
	I0429 14:32:45.106750 1960604 kic.go:430] container "ha-581657-m04" state is running.
	I0429 14:32:45.107469 1960604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-581657-m04
	I0429 14:32:45.135372 1960604 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/config.json ...
	I0429 14:32:45.135672 1960604 machine.go:94] provisionDockerMachine start ...
	I0429 14:32:45.135872 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m04
	I0429 14:32:45.168041 1960604 main.go:141] libmachine: Using SSH client type: native
	I0429 14:32:45.168319 1960604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35112 <nil> <nil>}
	I0429 14:32:45.168338 1960604 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 14:32:45.170060 1960604 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0429 14:32:48.316769 1960604 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-581657-m04
	
	I0429 14:32:48.316856 1960604 ubuntu.go:169] provisioning hostname "ha-581657-m04"
	I0429 14:32:48.316951 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m04
	I0429 14:32:48.352064 1960604 main.go:141] libmachine: Using SSH client type: native
	I0429 14:32:48.352301 1960604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35112 <nil> <nil>}
	I0429 14:32:48.352312 1960604 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-581657-m04 && echo "ha-581657-m04" | sudo tee /etc/hostname
	I0429 14:32:48.503330 1960604 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-581657-m04
	
	I0429 14:32:48.503544 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m04
	I0429 14:32:48.535312 1960604 main.go:141] libmachine: Using SSH client type: native
	I0429 14:32:48.535558 1960604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35112 <nil> <nil>}
	I0429 14:32:48.535575 1960604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-581657-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-581657-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-581657-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 14:32:48.681351 1960604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 14:32:48.681472 1960604 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18771-1897267/.minikube CaCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18771-1897267/.minikube}
	I0429 14:32:48.681524 1960604 ubuntu.go:177] setting up certificates
	I0429 14:32:48.681560 1960604 provision.go:84] configureAuth start
	I0429 14:32:48.681670 1960604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-581657-m04
	I0429 14:32:48.721157 1960604 provision.go:143] copyHostCerts
	I0429 14:32:48.721210 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem
	I0429 14:32:48.721253 1960604 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem, removing ...
	I0429 14:32:48.721260 1960604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem
	I0429 14:32:48.721437 1960604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem (1078 bytes)
	I0429 14:32:48.721574 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem
	I0429 14:32:48.721597 1960604 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem, removing ...
	I0429 14:32:48.721602 1960604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem
	I0429 14:32:48.721652 1960604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem (1123 bytes)
	I0429 14:32:48.721701 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem
	I0429 14:32:48.721721 1960604 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem, removing ...
	I0429 14:32:48.721726 1960604 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem
	I0429 14:32:48.721770 1960604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem (1679 bytes)
	I0429 14:32:48.721844 1960604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem org=jenkins.ha-581657-m04 san=[127.0.0.1 192.168.49.5 ha-581657-m04 localhost minikube]
	I0429 14:32:49.057586 1960604 provision.go:177] copyRemoteCerts
	I0429 14:32:49.057713 1960604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 14:32:49.057792 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m04
	I0429 14:32:49.087394 1960604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35112 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657-m04/id_rsa Username:docker}
	I0429 14:32:49.186681 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 14:32:49.186743 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 14:32:49.229151 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 14:32:49.229220 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 14:32:49.261265 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 14:32:49.261370 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 14:32:49.293409 1960604 provision.go:87] duration metric: took 611.810608ms to configureAuth
	I0429 14:32:49.293491 1960604 ubuntu.go:193] setting minikube options for container-runtime
	I0429 14:32:49.293765 1960604 config.go:182] Loaded profile config "ha-581657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:32:49.293927 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m04
	I0429 14:32:49.318325 1960604 main.go:141] libmachine: Using SSH client type: native
	I0429 14:32:49.318573 1960604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35112 <nil> <nil>}
	I0429 14:32:49.318588 1960604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 14:32:49.622505 1960604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 14:32:49.622576 1960604 machine.go:97] duration metric: took 4.486892396s to provisionDockerMachine
	I0429 14:32:49.622620 1960604 start.go:293] postStartSetup for "ha-581657-m04" (driver="docker")
	I0429 14:32:49.622658 1960604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 14:32:49.622758 1960604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 14:32:49.622834 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m04
	I0429 14:32:49.662529 1960604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35112 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657-m04/id_rsa Username:docker}
	I0429 14:32:49.770696 1960604 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 14:32:49.774730 1960604 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0429 14:32:49.774762 1960604 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0429 14:32:49.774773 1960604 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0429 14:32:49.774779 1960604 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0429 14:32:49.774790 1960604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-1897267/.minikube/addons for local assets ...
	I0429 14:32:49.774843 1960604 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-1897267/.minikube/files for local assets ...
	I0429 14:32:49.774921 1960604 filesync.go:149] local asset: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem -> 19026842.pem in /etc/ssl/certs
	I0429 14:32:49.774928 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem -> /etc/ssl/certs/19026842.pem
	I0429 14:32:49.775029 1960604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 14:32:49.784445 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem --> /etc/ssl/certs/19026842.pem (1708 bytes)
	I0429 14:32:49.810303 1960604 start.go:296] duration metric: took 187.64286ms for postStartSetup
	I0429 14:32:49.810385 1960604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 14:32:49.810425 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m04
	I0429 14:32:49.836111 1960604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35112 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657-m04/id_rsa Username:docker}
	I0429 14:32:49.928481 1960604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 14:32:49.934727 1960604 fix.go:56] duration metric: took 5.234009727s for fixHost
	I0429 14:32:49.934749 1960604 start.go:83] releasing machines lock for "ha-581657-m04", held for 5.234086339s
	I0429 14:32:49.934817 1960604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-581657-m04
	I0429 14:32:49.963347 1960604 out.go:177] * Found network options:
	I0429 14:32:49.967246 1960604 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0429 14:32:49.969540 1960604 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 14:32:49.969570 1960604 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 14:32:49.969594 1960604 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 14:32:49.969606 1960604 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 14:32:49.969677 1960604 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 14:32:49.969723 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m04
	I0429 14:32:49.969985 1960604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 14:32:49.970039 1960604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m04
	I0429 14:32:50.004843 1960604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35112 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657-m04/id_rsa Username:docker}
	I0429 14:32:50.006585 1960604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35112 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657-m04/id_rsa Username:docker}
	I0429 14:32:50.297921 1960604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 14:32:50.303557 1960604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 14:32:50.314064 1960604 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0429 14:32:50.314149 1960604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 14:32:50.328530 1960604 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 14:32:50.328556 1960604 start.go:494] detecting cgroup driver to use...
	I0429 14:32:50.328589 1960604 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0429 14:32:50.328637 1960604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 14:32:50.345949 1960604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 14:32:50.359600 1960604 docker.go:217] disabling cri-docker service (if available) ...
	I0429 14:32:50.359666 1960604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 14:32:50.375302 1960604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 14:32:50.389357 1960604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 14:32:50.490165 1960604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 14:32:50.602930 1960604 docker.go:233] disabling docker service ...
	I0429 14:32:50.603007 1960604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 14:32:50.619510 1960604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 14:32:50.632356 1960604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 14:32:50.726721 1960604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 14:32:50.829186 1960604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 14:32:50.843373 1960604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 14:32:50.861764 1960604 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 14:32:50.861886 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:32:50.877197 1960604 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 14:32:50.877301 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:32:50.888396 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:32:50.898785 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:32:50.908938 1960604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 14:32:50.923141 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:32:50.935602 1960604 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:32:50.946401 1960604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:32:50.958440 1960604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 14:32:50.967332 1960604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 14:32:50.976325 1960604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:32:51.082163 1960604 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 14:32:51.215304 1960604 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 14:32:51.215419 1960604 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 14:32:51.219494 1960604 start.go:562] Will wait 60s for crictl version
	I0429 14:32:51.219598 1960604 ssh_runner.go:195] Run: which crictl
	I0429 14:32:51.223887 1960604 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 14:32:51.272101 1960604 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0429 14:32:51.272238 1960604 ssh_runner.go:195] Run: crio --version
	I0429 14:32:51.313723 1960604 ssh_runner.go:195] Run: crio --version
	I0429 14:32:51.376922 1960604 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.24.6 ...
	I0429 14:32:51.379150 1960604 out.go:177]   - env NO_PROXY=192.168.49.2
	I0429 14:32:51.381529 1960604 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0429 14:32:51.383866 1960604 cli_runner.go:164] Run: docker network inspect ha-581657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 14:32:51.399015 1960604 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0429 14:32:51.403199 1960604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 14:32:51.414253 1960604 mustload.go:65] Loading cluster: ha-581657
	I0429 14:32:51.414487 1960604 config.go:182] Loaded profile config "ha-581657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:32:51.414738 1960604 cli_runner.go:164] Run: docker container inspect ha-581657 --format={{.State.Status}}
	I0429 14:32:51.434030 1960604 host.go:66] Checking if "ha-581657" exists ...
	I0429 14:32:51.434362 1960604 certs.go:68] Setting up /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657 for IP: 192.168.49.5
	I0429 14:32:51.434376 1960604 certs.go:194] generating shared ca certs ...
	I0429 14:32:51.434391 1960604 certs.go:226] acquiring lock for ca certs: {Name:mk012c6865f9f1625b7bfd5d0280b6707793520e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:32:51.434516 1960604 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key
	I0429 14:32:51.434562 1960604 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key
	I0429 14:32:51.434577 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 14:32:51.434590 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 14:32:51.434610 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 14:32:51.434624 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 14:32:51.434681 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684.pem (1338 bytes)
	W0429 14:32:51.434713 1960604 certs.go:480] ignoring /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684_empty.pem, impossibly tiny 0 bytes
	I0429 14:32:51.434723 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 14:32:51.434752 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem (1078 bytes)
	I0429 14:32:51.434778 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem (1123 bytes)
	I0429 14:32:51.434803 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem (1679 bytes)
	I0429 14:32:51.434849 1960604 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem (1708 bytes)
	I0429 14:32:51.434882 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem -> /usr/share/ca-certificates/19026842.pem
	I0429 14:32:51.434899 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:32:51.434911 1960604 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684.pem -> /usr/share/ca-certificates/1902684.pem
	I0429 14:32:51.434932 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 14:32:51.479024 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 14:32:51.506716 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 14:32:51.534250 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 14:32:51.562634 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem --> /usr/share/ca-certificates/19026842.pem (1708 bytes)
	I0429 14:32:51.591321 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 14:32:51.615801 1960604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684.pem --> /usr/share/ca-certificates/1902684.pem (1338 bytes)
	I0429 14:32:51.641289 1960604 ssh_runner.go:195] Run: openssl version
	I0429 14:32:51.647018 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19026842.pem && ln -fs /usr/share/ca-certificates/19026842.pem /etc/ssl/certs/19026842.pem"
	I0429 14:32:51.657326 1960604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19026842.pem
	I0429 14:32:51.662075 1960604 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 14:18 /usr/share/ca-certificates/19026842.pem
	I0429 14:32:51.662143 1960604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19026842.pem
	I0429 14:32:51.669302 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19026842.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 14:32:51.678392 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 14:32:51.687928 1960604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:32:51.691390 1960604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 14:07 /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:32:51.691475 1960604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:32:51.698327 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 14:32:51.707436 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1902684.pem && ln -fs /usr/share/ca-certificates/1902684.pem /etc/ssl/certs/1902684.pem"
	I0429 14:32:51.717402 1960604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1902684.pem
	I0429 14:32:51.721173 1960604 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 14:18 /usr/share/ca-certificates/1902684.pem
	I0429 14:32:51.721241 1960604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1902684.pem
	I0429 14:32:51.728283 1960604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1902684.pem /etc/ssl/certs/51391683.0"
	I0429 14:32:51.737627 1960604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 14:32:51.740990 1960604 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 14:32:51.741034 1960604 kubeadm.go:928] updating node {m04 192.168.49.5 0 v1.30.0  false true} ...
	I0429 14:32:51.741113 1960604 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-581657-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-581657 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 14:32:51.741179 1960604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 14:32:51.750704 1960604 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 14:32:51.750823 1960604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0429 14:32:51.760184 1960604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0429 14:32:51.781275 1960604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 14:32:51.799747 1960604 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0429 14:32:51.803618 1960604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 14:32:51.815453 1960604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:32:51.912165 1960604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 14:32:51.924260 1960604 start.go:234] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0429 14:32:51.928192 1960604 out.go:177] * Verifying Kubernetes components...
	I0429 14:32:51.924597 1960604 config.go:182] Loaded profile config "ha-581657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:32:51.930408 1960604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:32:52.026433 1960604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 14:32:52.040111 1960604 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:32:52.040500 1960604 kapi.go:59] client config for ha-581657: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/client.crt", KeyFile:"/home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/ha-581657/client.key", CAFile:"/home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17a1740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 14:32:52.040593 1960604 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0429 14:32:52.040906 1960604 node_ready.go:35] waiting up to 6m0s for node "ha-581657-m04" to be "Ready" ...
	I0429 14:32:52.041009 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:32:52.041018 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:52.041026 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:52.041032 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:52.043820 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:32:52.541618 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:32:52.541644 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:52.541657 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:52.541661 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:52.544532 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:32:53.041152 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:32:53.041179 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:53.041192 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:53.041197 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:53.043954 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:32:53.541162 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:32:53.541188 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:53.541199 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:53.541204 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:53.544006 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:32:54.041283 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:32:54.041310 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:54.041320 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:54.041324 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:54.044591 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:32:54.045313 1960604 node_ready.go:53] node "ha-581657-m04" has status "Ready":"Unknown"
	I0429 14:32:54.541141 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:32:54.541165 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:54.541174 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:54.541178 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:54.544129 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:32:55.041917 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:32:55.041942 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:55.041953 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:55.041959 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:55.045323 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:32:55.541116 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:32:55.541140 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:55.541156 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:55.541159 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:55.544915 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:32:56.041131 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:32:56.041155 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:56.041163 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:56.041167 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:56.044215 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:32:56.541220 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:32:56.541243 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:56.541253 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:56.541257 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:56.543989 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:32:56.544780 1960604 node_ready.go:53] node "ha-581657-m04" has status "Ready":"Unknown"
	I0429 14:32:57.041218 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:32:57.041245 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:57.041254 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:57.041259 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:57.044191 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:32:57.541697 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:32:57.541722 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:57.541731 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:57.541735 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:57.544463 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:32:58.042036 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:32:58.042061 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:58.042070 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:58.042075 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:58.044982 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:32:58.542093 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:32:58.542119 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:58.542129 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:58.542134 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:58.545029 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:32:58.545770 1960604 node_ready.go:53] node "ha-581657-m04" has status "Ready":"Unknown"
	I0429 14:32:59.041216 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:32:59.041237 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:59.041246 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:59.041252 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:59.043888 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:32:59.044475 1960604 node_ready.go:49] node "ha-581657-m04" has status "Ready":"True"
	I0429 14:32:59.044492 1960604 node_ready.go:38] duration metric: took 7.003563642s for node "ha-581657-m04" to be "Ready" ...
	I0429 14:32:59.044502 1960604 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 14:32:59.044562 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0429 14:32:59.044569 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:59.044576 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:59.044582 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:59.049908 1960604 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 14:32:59.056407 1960604 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9nqsr" in "kube-system" namespace to be "Ready" ...
	I0429 14:32:59.056606 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:32:59.056635 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:59.056651 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:59.056655 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:59.059444 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:32:59.060203 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:32:59.060221 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:59.060230 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:59.060234 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:59.062492 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:32:59.557282 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:32:59.557314 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:59.557323 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:59.557328 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:59.560590 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:32:59.561347 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:32:59.561367 1960604 round_trippers.go:469] Request Headers:
	I0429 14:32:59.561391 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:32:59.561400 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:32:59.563950 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:00.057290 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:00.057323 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:00.057346 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:00.057353 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:00.098309 1960604 round_trippers.go:574] Response Status: 200 OK in 40 milliseconds
	I0429 14:33:00.099088 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:00.099103 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:00.099111 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:00.099117 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:00.113092 1960604 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0429 14:33:00.556648 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:00.556695 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:00.556705 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:00.556712 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:00.561674 1960604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 14:33:00.562448 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:00.562484 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:00.562495 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:00.562501 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:00.567866 1960604 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 14:33:01.057369 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:01.057388 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:01.057397 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:01.057400 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:01.073800 1960604 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0429 14:33:01.076692 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:01.076754 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:01.076778 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:01.076796 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:01.084584 1960604 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 14:33:01.085657 1960604 pod_ready.go:102] pod "coredns-7db6d8ff4d-9nqsr" in "kube-system" namespace has status "Ready":"False"
	I0429 14:33:01.557058 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:01.557079 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:01.557089 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:01.557105 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:01.565615 1960604 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 14:33:01.566841 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:01.566860 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:01.566869 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:01.566874 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:01.569566 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:02.057568 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:02.057593 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:02.057602 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:02.057606 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:02.060545 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:02.061345 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:02.061364 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:02.061373 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:02.061378 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:02.063971 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:02.556698 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:02.556721 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:02.556731 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:02.556734 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:02.562785 1960604 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 14:33:02.563615 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:02.563637 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:02.563646 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:02.563651 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:02.566280 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:03.057181 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:03.057208 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:03.057218 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:03.057224 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:03.060493 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:33:03.061360 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:03.061389 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:03.061401 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:03.061409 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:03.064427 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:03.557439 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:03.557464 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:03.557473 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:03.557477 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:03.561453 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:33:03.562504 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:03.562526 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:03.562536 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:03.562545 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:03.565173 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:03.566854 1960604 pod_ready.go:102] pod "coredns-7db6d8ff4d-9nqsr" in "kube-system" namespace has status "Ready":"False"
	I0429 14:33:04.056581 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:04.056609 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:04.056619 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:04.056623 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:04.059919 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:33:04.060847 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:04.060872 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:04.060881 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:04.060886 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:04.063701 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:04.557584 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:04.557605 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:04.557614 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:04.557618 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:04.563651 1960604 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 14:33:04.564426 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:04.564438 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:04.564446 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:04.564458 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:04.568449 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:33:05.056607 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:05.056633 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:05.056643 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:05.056647 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:05.059621 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:05.060250 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:05.060261 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:05.060269 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:05.060274 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:05.063033 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:05.557278 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:05.557311 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:05.557338 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:05.557355 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:05.567536 1960604 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0429 14:33:05.568366 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:05.568385 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:05.568394 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:05.568399 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:05.571217 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:05.571871 1960604 pod_ready.go:102] pod "coredns-7db6d8ff4d-9nqsr" in "kube-system" namespace has status "Ready":"False"
	I0429 14:33:06.056711 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:06.056740 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:06.056751 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:06.056758 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:06.059887 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:33:06.060925 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:06.060948 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:06.060958 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:06.060962 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:06.063717 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:06.557233 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:06.557254 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:06.557262 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:06.557267 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:06.561576 1960604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 14:33:06.562454 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:06.562478 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:06.562488 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:06.562492 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:06.565195 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:07.057659 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:07.057686 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:07.057695 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:07.057699 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:07.060557 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:07.061289 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:07.061309 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:07.061319 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:07.061322 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:07.063988 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:07.557457 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:07.557482 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:07.557492 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:07.557496 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:07.563462 1960604 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 14:33:07.566066 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:07.566091 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:07.566101 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:07.566104 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:07.577778 1960604 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0429 14:33:07.578798 1960604 pod_ready.go:102] pod "coredns-7db6d8ff4d-9nqsr" in "kube-system" namespace has status "Ready":"False"
	I0429 14:33:08.057344 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:08.057370 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:08.057379 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:08.057383 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:08.060340 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:08.061277 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:08.061300 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:08.061311 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:08.061339 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:08.063991 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:08.557379 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:08.557401 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:08.557410 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:08.557416 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:08.565623 1960604 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 14:33:08.566827 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:08.566847 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:08.566856 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:08.566865 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:08.569368 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:09.057242 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:09.057268 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:09.057278 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:09.057284 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:09.060209 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:09.061299 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:09.061322 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:09.061332 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:09.061337 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:09.064276 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:09.556845 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:09.556873 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:09.556883 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:09.556888 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:09.565130 1960604 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0429 14:33:09.566520 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:09.566538 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:09.566548 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:09.566552 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:09.569564 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:10.056809 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:10.056835 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:10.056845 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:10.056849 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:10.059775 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:10.060497 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:10.060520 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:10.060530 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:10.060535 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:10.064429 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:33:10.065096 1960604 pod_ready.go:102] pod "coredns-7db6d8ff4d-9nqsr" in "kube-system" namespace has status "Ready":"False"
	I0429 14:33:10.557553 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:10.557575 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:10.557585 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:10.557590 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:10.563088 1960604 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 14:33:10.563892 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:10.563914 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:10.563923 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:10.563929 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:10.567039 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:33:11.057517 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:11.057545 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:11.057557 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:11.057564 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:11.060712 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:33:11.061648 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:11.061668 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:11.061679 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:11.061684 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:11.064304 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:11.556683 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9nqsr
	I0429 14:33:11.556705 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:11.556713 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:11.556733 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:11.559876 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:33:11.560984 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:11.561001 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:11.561011 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:11.561016 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:11.564685 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:33:11.565268 1960604 pod_ready.go:97] node "ha-581657" hosting pod "coredns-7db6d8ff4d-9nqsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"Unknown"
	I0429 14:33:11.565289 1960604 pod_ready.go:81] duration metric: took 12.508851088s for pod "coredns-7db6d8ff4d-9nqsr" in "kube-system" namespace to be "Ready" ...
	E0429 14:33:11.565298 1960604 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-581657" hosting pod "coredns-7db6d8ff4d-9nqsr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"Unknown"
	I0429 14:33:11.565305 1960604 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qvn8n" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:11.565367 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-qvn8n
	I0429 14:33:11.565372 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:11.565383 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:11.565386 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:11.568085 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:11.568854 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:11.568897 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:11.568918 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:11.568937 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:11.571463 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:11.572254 1960604 pod_ready.go:97] node "ha-581657" hosting pod "coredns-7db6d8ff4d-qvn8n" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"Unknown"
	I0429 14:33:11.572308 1960604 pod_ready.go:81] duration metric: took 6.995673ms for pod "coredns-7db6d8ff4d-qvn8n" in "kube-system" namespace to be "Ready" ...
	E0429 14:33:11.572333 1960604 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-581657" hosting pod "coredns-7db6d8ff4d-qvn8n" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"Unknown"
	I0429 14:33:11.572352 1960604 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-581657" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:11.572441 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-581657
	I0429 14:33:11.572466 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:11.572487 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:11.572508 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:11.574982 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:11.575820 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:11.575835 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:11.575843 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:11.575847 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:11.578930 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:33:11.579617 1960604 pod_ready.go:97] node "ha-581657" hosting pod "etcd-ha-581657" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"Unknown"
	I0429 14:33:11.579644 1960604 pod_ready.go:81] duration metric: took 7.259043ms for pod "etcd-ha-581657" in "kube-system" namespace to be "Ready" ...
	E0429 14:33:11.579655 1960604 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-581657" hosting pod "etcd-ha-581657" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"Unknown"
	I0429 14:33:11.579666 1960604 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-581657-m02" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:11.579735 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-581657-m02
	I0429 14:33:11.579743 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:11.579751 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:11.579755 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:11.582147 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:11.582925 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:33:11.582945 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:11.582954 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:11.582958 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:11.585443 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:11.586029 1960604 pod_ready.go:92] pod "etcd-ha-581657-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 14:33:11.586050 1960604 pod_ready.go:81] duration metric: took 6.372932ms for pod "etcd-ha-581657-m02" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:11.586070 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-581657" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:11.586133 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-581657
	I0429 14:33:11.586142 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:11.586150 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:11.586154 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:11.588745 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:11.589545 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:11.589560 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:11.589567 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:11.589570 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:11.591907 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:11.592539 1960604 pod_ready.go:97] node "ha-581657" hosting pod "kube-apiserver-ha-581657" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"Unknown"
	I0429 14:33:11.592565 1960604 pod_ready.go:81] duration metric: took 6.485048ms for pod "kube-apiserver-ha-581657" in "kube-system" namespace to be "Ready" ...
	E0429 14:33:11.592595 1960604 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-581657" hosting pod "kube-apiserver-ha-581657" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"Unknown"
	I0429 14:33:11.592604 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-581657-m02" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:11.756943 1960604 request.go:629] Waited for 164.274779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-581657-m02
	I0429 14:33:11.757008 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-581657-m02
	I0429 14:33:11.757017 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:11.757031 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:11.757038 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:11.759756 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:11.956797 1960604 request.go:629] Waited for 196.253444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:33:11.956853 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:33:11.956859 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:11.956868 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:11.956875 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:11.959481 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:11.960118 1960604 pod_ready.go:92] pod "kube-apiserver-ha-581657-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 14:33:11.960141 1960604 pod_ready.go:81] duration metric: took 367.526227ms for pod "kube-apiserver-ha-581657-m02" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:11.960154 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-581657" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:12.157469 1960604 request.go:629] Waited for 197.178567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-581657
	I0429 14:33:12.157577 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-581657
	I0429 14:33:12.157600 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:12.157637 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:12.157656 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:12.160831 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:33:12.357619 1960604 request.go:629] Waited for 196.036076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:12.357682 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:12.357717 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:12.357730 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:12.357735 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:12.360468 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:12.361053 1960604 pod_ready.go:97] node "ha-581657" hosting pod "kube-controller-manager-ha-581657" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"Unknown"
	I0429 14:33:12.361074 1960604 pod_ready.go:81] duration metric: took 400.880836ms for pod "kube-controller-manager-ha-581657" in "kube-system" namespace to be "Ready" ...
	E0429 14:33:12.361085 1960604 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-581657" hosting pod "kube-controller-manager-ha-581657" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"Unknown"
	I0429 14:33:12.361093 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-581657-m02" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:12.557469 1960604 request.go:629] Waited for 196.301416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-581657-m02
	I0429 14:33:12.557579 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-581657-m02
	I0429 14:33:12.557645 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:12.557673 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:12.557690 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:12.577818 1960604 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0429 14:33:12.756764 1960604 request.go:629] Waited for 178.143634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:33:12.756832 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:33:12.756855 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:12.756866 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:12.756874 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:12.760898 1960604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 14:33:12.761591 1960604 pod_ready.go:92] pod "kube-controller-manager-ha-581657-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 14:33:12.761612 1960604 pod_ready.go:81] duration metric: took 400.507688ms for pod "kube-controller-manager-ha-581657-m02" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:12.761624 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d8t8s" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:12.957153 1960604 request.go:629] Waited for 195.459129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8t8s
	I0429 14:33:12.957240 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d8t8s
	I0429 14:33:12.957250 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:12.957259 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:12.957264 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:12.959959 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:13.156922 1960604 request.go:629] Waited for 196.129128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:13.157006 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:13.157034 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:13.157046 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:13.157052 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:13.160254 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:33:13.161393 1960604 pod_ready.go:97] node "ha-581657" hosting pod "kube-proxy-d8t8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"Unknown"
	I0429 14:33:13.161420 1960604 pod_ready.go:81] duration metric: took 399.788041ms for pod "kube-proxy-d8t8s" in "kube-system" namespace to be "Ready" ...
	E0429 14:33:13.161431 1960604 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-581657" hosting pod "kube-proxy-d8t8s" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"Unknown"
	I0429 14:33:13.161439 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hshwx" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:13.357443 1960604 request.go:629] Waited for 195.919896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hshwx
	I0429 14:33:13.357527 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hshwx
	I0429 14:33:13.357535 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:13.357554 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:13.357566 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:13.360408 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:13.557457 1960604 request.go:629] Waited for 196.348132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:33:13.557632 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m04
	I0429 14:33:13.557643 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:13.557658 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:13.557662 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:13.563110 1960604 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 14:33:13.563730 1960604 pod_ready.go:92] pod "kube-proxy-hshwx" in "kube-system" namespace has status "Ready":"True"
	I0429 14:33:13.563787 1960604 pod_ready.go:81] duration metric: took 402.339675ms for pod "kube-proxy-hshwx" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:13.563817 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zhbtq" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:13.757598 1960604 request.go:629] Waited for 193.677337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhbtq
	I0429 14:33:13.757676 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zhbtq
	I0429 14:33:13.757683 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:13.757691 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:13.757702 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:13.760594 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:13.956904 1960604 request.go:629] Waited for 195.172895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:33:13.957039 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:33:13.957070 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:13.957089 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:13.957100 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:13.959781 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:13.960338 1960604 pod_ready.go:92] pod "kube-proxy-zhbtq" in "kube-system" namespace has status "Ready":"True"
	I0429 14:33:13.960359 1960604 pod_ready.go:81] duration metric: took 396.522734ms for pod "kube-proxy-zhbtq" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:13.960371 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-581657" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:14.157454 1960604 request.go:629] Waited for 197.017253ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-581657
	I0429 14:33:14.157567 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-581657
	I0429 14:33:14.157611 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:14.157626 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:14.157631 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:14.160474 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:14.357376 1960604 request.go:629] Waited for 196.342056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:14.357452 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657
	I0429 14:33:14.357460 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:14.357469 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:14.357479 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:14.360169 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:14.360968 1960604 pod_ready.go:97] node "ha-581657" hosting pod "kube-scheduler-ha-581657" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"Unknown"
	I0429 14:33:14.360992 1960604 pod_ready.go:81] duration metric: took 400.613715ms for pod "kube-scheduler-ha-581657" in "kube-system" namespace to be "Ready" ...
	E0429 14:33:14.361019 1960604 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-581657" hosting pod "kube-scheduler-ha-581657" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-581657" has status "Ready":"Unknown"
	I0429 14:33:14.361033 1960604 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-581657-m02" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:14.556875 1960604 request.go:629] Waited for 195.778803ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-581657-m02
	I0429 14:33:14.557031 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-581657-m02
	I0429 14:33:14.557063 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:14.557087 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:14.557107 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:14.560507 1960604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 14:33:14.757379 1960604 request.go:629] Waited for 196.170288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:33:14.757466 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-581657-m02
	I0429 14:33:14.757520 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:14.757530 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:14.757536 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:14.760213 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:14.761004 1960604 pod_ready.go:92] pod "kube-scheduler-ha-581657-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 14:33:14.761027 1960604 pod_ready.go:81] duration metric: took 399.986379ms for pod "kube-scheduler-ha-581657-m02" in "kube-system" namespace to be "Ready" ...
	I0429 14:33:14.761041 1960604 pod_ready.go:38] duration metric: took 15.716528616s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 14:33:14.761056 1960604 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 14:33:14.761121 1960604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 14:33:14.772615 1960604 system_svc.go:56] duration metric: took 11.551772ms WaitForService to wait for kubelet
	I0429 14:33:14.772646 1960604 kubeadm.go:576] duration metric: took 22.848000986s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 14:33:14.772685 1960604 node_conditions.go:102] verifying NodePressure condition ...
	I0429 14:33:14.956982 1960604 request.go:629] Waited for 184.200233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0429 14:33:14.957070 1960604 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0429 14:33:14.957080 1960604 round_trippers.go:469] Request Headers:
	I0429 14:33:14.957088 1960604 round_trippers.go:473]     Accept: application/json, */*
	I0429 14:33:14.957095 1960604 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0429 14:33:14.960012 1960604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 14:33:14.961301 1960604 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0429 14:33:14.961330 1960604 node_conditions.go:123] node cpu capacity is 2
	I0429 14:33:14.961341 1960604 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0429 14:33:14.961346 1960604 node_conditions.go:123] node cpu capacity is 2
	I0429 14:33:14.961351 1960604 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0429 14:33:14.961356 1960604 node_conditions.go:123] node cpu capacity is 2
	I0429 14:33:14.961361 1960604 node_conditions.go:105] duration metric: took 188.670415ms to run NodePressure ...
	I0429 14:33:14.961376 1960604 start.go:240] waiting for startup goroutines ...
	I0429 14:33:14.961397 1960604 start.go:254] writing updated cluster config ...
	I0429 14:33:14.961709 1960604 ssh_runner.go:195] Run: rm -f paused
	I0429 14:33:15.044239 1960604 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 14:33:15.048908 1960604 out.go:177] * Done! kubectl is now configured to use "ha-581657" cluster and "default" namespace by default
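	(Editor's note on the polling seen above: node_ready.go and pod_ready.go repeatedly GET the node and pod objects until their Ready condition reports True, backing off when the client-side throttler delays a request. The sketch below is a minimal, hypothetical client-go illustration of that wait pattern, not minikube's actual implementation; the helper name waitPodReady, the 2-second poll interval, and the example pod name are assumptions for illustration only.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the pod until its PodReady condition is True or the
	// timeout elapses, mirroring the GET loop visible in the log above.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		// Load the default kubeconfig (~/.kube/config); an assumption for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-9nqsr", 6*time.Minute); err != nil {
			fmt.Println("pod not ready:", err)
		}
	}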
	
	
	==> CRI-O <==
	Apr 29 14:32:42 ha-581657 crio[642]: time="2024-04-29 14:32:42.065740361Z" level=info msg="Updated default CNI network name to kindnet"
	Apr 29 14:32:42 ha-581657 crio[642]: time="2024-04-29 14:32:42.065760111Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Apr 29 14:32:42 ha-581657 crio[642]: time="2024-04-29 14:32:42.072241520Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Apr 29 14:32:42 ha-581657 crio[642]: time="2024-04-29 14:32:42.072279125Z" level=info msg="Updated default CNI network name to kindnet"
	Apr 29 14:32:42 ha-581657 conmon[1384]: conmon 4980c6d4fcd76e525a79 <ninfo>: container 1421 exited with status 1
	Apr 29 14:32:42 ha-581657 crio[642]: time="2024-04-29 14:32:42.330199037Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e4a16c16-2754-4716-93c9-57a07ffc3770 name=/runtime.v1.ImageService/ImageStatus
	Apr 29 14:32:42 ha-581657 crio[642]: time="2024-04-29 14:32:42.330424844Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e4a16c16-2754-4716-93c9-57a07ffc3770 name=/runtime.v1.ImageService/ImageStatus
	Apr 29 14:32:42 ha-581657 crio[642]: time="2024-04-29 14:32:42.331131154Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2e16b2ad-b83f-460b-87d1-7c297e8d2ab8 name=/runtime.v1.ImageService/ImageStatus
	Apr 29 14:32:42 ha-581657 crio[642]: time="2024-04-29 14:32:42.331324337Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2e16b2ad-b83f-460b-87d1-7c297e8d2ab8 name=/runtime.v1.ImageService/ImageStatus
	Apr 29 14:32:42 ha-581657 crio[642]: time="2024-04-29 14:32:42.332231889Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8a48edf0-a233-4f00-99f4-f9a4f17af0f3 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 14:32:42 ha-581657 crio[642]: time="2024-04-29 14:32:42.332335249Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 29 14:32:42 ha-581657 crio[642]: time="2024-04-29 14:32:42.347677227Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/87e4f10593d247df54058eef2dbdae9107386de02e7edf34fd967d0349968460/merged/etc/passwd: no such file or directory"
	Apr 29 14:32:42 ha-581657 crio[642]: time="2024-04-29 14:32:42.347906727Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/87e4f10593d247df54058eef2dbdae9107386de02e7edf34fd967d0349968460/merged/etc/group: no such file or directory"
	Apr 29 14:32:42 ha-581657 crio[642]: time="2024-04-29 14:32:42.394898075Z" level=info msg="Created container df34ebe090ba6c37028a3c86fe0daef648e36ff33d5bd09362f6e14dc332a9cb: kube-system/storage-provisioner/storage-provisioner" id=8a48edf0-a233-4f00-99f4-f9a4f17af0f3 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 14:32:42 ha-581657 crio[642]: time="2024-04-29 14:32:42.395686175Z" level=info msg="Starting container: df34ebe090ba6c37028a3c86fe0daef648e36ff33d5bd09362f6e14dc332a9cb" id=df10c799-1229-4228-86a7-ca9f4795db6a name=/runtime.v1.RuntimeService/StartContainer
	Apr 29 14:32:42 ha-581657 crio[642]: time="2024-04-29 14:32:42.417383859Z" level=info msg="Started container" PID=1872 containerID=df34ebe090ba6c37028a3c86fe0daef648e36ff33d5bd09362f6e14dc332a9cb description=kube-system/storage-provisioner/storage-provisioner id=df10c799-1229-4228-86a7-ca9f4795db6a name=/runtime.v1.RuntimeService/StartContainer sandboxID=02a11961ee55f5a331e749e625276b90229f359db6eae63f061cebc68e8d9db1
	Apr 29 14:32:47 ha-581657 crio[642]: time="2024-04-29 14:32:47.142680609Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.30.0" id=14f05f27-897f-40e7-969e-7cabf7c638b5 name=/runtime.v1.ImageService/ImageStatus
	Apr 29 14:32:47 ha-581657 crio[642]: time="2024-04-29 14:32:47.142892845Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe registry.k8s.io/kube-controller-manager@sha256:63e991c4fc8bdc8fce68c183d152ba3ab560dc0a9b71ff97332a74a7605bbd3f],Size_:108229958,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=14f05f27-897f-40e7-969e-7cabf7c638b5 name=/runtime.v1.ImageService/ImageStatus
	Apr 29 14:32:47 ha-581657 crio[642]: time="2024-04-29 14:32:47.143692227Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.30.0" id=2addfc0f-365c-432f-9471-edcb0b617830 name=/runtime.v1.ImageService/ImageStatus
	Apr 29 14:32:47 ha-581657 crio[642]: time="2024-04-29 14:32:47.143893016Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe registry.k8s.io/kube-controller-manager@sha256:63e991c4fc8bdc8fce68c183d152ba3ab560dc0a9b71ff97332a74a7605bbd3f],Size_:108229958,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=2addfc0f-365c-432f-9471-edcb0b617830 name=/runtime.v1.ImageService/ImageStatus
	Apr 29 14:32:47 ha-581657 crio[642]: time="2024-04-29 14:32:47.144849462Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-581657/kube-controller-manager" id=95a95cf3-8c88-4d98-848f-5147a206f63b name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 14:32:47 ha-581657 crio[642]: time="2024-04-29 14:32:47.144941106Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 29 14:32:47 ha-581657 crio[642]: time="2024-04-29 14:32:47.225200609Z" level=info msg="Created container a2326e4fb5a476e563367aca5dbf1fb98646d3ac6a0e1c19015d2e494ac20228: kube-system/kube-controller-manager-ha-581657/kube-controller-manager" id=95a95cf3-8c88-4d98-848f-5147a206f63b name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 14:32:47 ha-581657 crio[642]: time="2024-04-29 14:32:47.225702036Z" level=info msg="Starting container: a2326e4fb5a476e563367aca5dbf1fb98646d3ac6a0e1c19015d2e494ac20228" id=43f31c11-f0a0-4c08-a714-5cc8be67d018 name=/runtime.v1.RuntimeService/StartContainer
	Apr 29 14:32:47 ha-581657 crio[642]: time="2024-04-29 14:32:47.232838062Z" level=info msg="Started container" PID=1913 containerID=a2326e4fb5a476e563367aca5dbf1fb98646d3ac6a0e1c19015d2e494ac20228 description=kube-system/kube-controller-manager-ha-581657/kube-controller-manager id=43f31c11-f0a0-4c08-a714-5cc8be67d018 name=/runtime.v1.RuntimeService/StartContainer sandboxID=28c495556f09d3ac4b48f25d92a913427f0de168b36eb1a42734c5475eca83da
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a2326e4fb5a47       68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1   30 seconds ago       Running             kube-controller-manager   8                   28c495556f09d       kube-controller-manager-ha-581657
	df34ebe090ba6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   35 seconds ago       Running             storage-provisioner       5                   02a11961ee55f       storage-provisioner
	d0bc367dadcb6       adf781c1312f06f9d22bfc391f48c68e39ed1bfe4166c6ec09faea1a89f23d46   40 seconds ago       Running             kube-vip                  3                   d80a08804e76b       kube-vip-ha-581657
	4d4a0d3a743b1       181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb   45 seconds ago       Running             kube-apiserver            4                   54e32b885ba5c       kube-apiserver-ha-581657
	aca6e80b24f7c       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   About a minute ago   Running             coredns                   2                   d5c95ba2f0b86       coredns-7db6d8ff4d-qvn8n
	443e0007f976c       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   2                   45e7cb2b8ee2b       busybox-fc5497c4f-jpbj7
	40cc82f3fdc69       cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f   About a minute ago   Running             kube-proxy                2                   24dae7ed55080       kube-proxy-d8t8s
	4980c6d4fcd76       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       4                   02a11961ee55f       storage-provisioner
	85273a013cb8b       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   About a minute ago   Running             coredns                   2                   e4a2cb55bd088       coredns-7db6d8ff4d-9nqsr
	96eba157f55f8       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d   About a minute ago   Running             kindnet-cni               2                   14667c330a6af       kindnet-z64kr
	a0ef46ef9701e       68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1   About a minute ago   Exited              kube-controller-manager   7                   28c495556f09d       kube-controller-manager-ha-581657
	f2be2ccb714f0       547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a   About a minute ago   Running             kube-scheduler            2                   f2629a71c7c64       kube-scheduler-ha-581657
	9446d9588318c       adf781c1312f06f9d22bfc391f48c68e39ed1bfe4166c6ec09faea1a89f23d46   About a minute ago   Exited              kube-vip                  2                   d80a08804e76b       kube-vip-ha-581657
	ad0181d7acf7c       181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb   About a minute ago   Exited              kube-apiserver            3                   54e32b885ba5c       kube-apiserver-ha-581657
	5c5aa6ef46cbb       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd   About a minute ago   Running             etcd                      2                   a2aa3006ebcaf       etcd-ha-581657
	
	
	==> coredns [85273a013cb8bd1d37ee99f1e2ef433e1c92b5c28c050ad3f9451e132e4a23d7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56265 - 4403 "HINFO IN 8764190514139391721.4359758099340279566. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030408222s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[407574886]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Apr-2024 14:32:11.920) (total time: 30001ms):
	Trace[407574886]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (14:32:41.921)
	Trace[407574886]: [30.00161199s] [30.00161199s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1693969343]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Apr-2024 14:32:11.921) (total time: 30001ms):
	Trace[1693969343]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (14:32:41.922)
	Trace[1693969343]: [30.001489938s] [30.001489938s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1197777805]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Apr-2024 14:32:11.921) (total time: 30001ms):
	Trace[1197777805]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (14:32:41.923)
	Trace[1197777805]: [30.00156178s] [30.00156178s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [aca6e80b24f7c631e7f174e46ee613ab5071fc1e88f2d8cd398b38b698fc5643] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41119 - 53446 "HINFO IN 191980602826495390.8853643310987154023. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.027435841s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1341756005]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Apr-2024 14:32:12.091) (total time: 30000ms):
	Trace[1341756005]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:32:42.091)
	Trace[1341756005]: [30.000699501s] [30.000699501s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[69177611]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Apr-2024 14:32:12.092) (total time: 30001ms):
	Trace[69177611]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (14:32:42.093)
	Trace[69177611]: [30.001581677s] [30.001581677s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[497016025]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Apr-2024 14:32:12.091) (total time: 30001ms):
	Trace[497016025]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (14:32:42.093)
	Trace[497016025]: [30.001963615s] [30.001963615s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-581657
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-581657
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=94a01df0d4b48636d4af3d06a53be687e06c0844
	                    minikube.k8s.io/name=ha-581657
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T14_22_47_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 14:22:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-581657
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 14:32:27 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 14:31:57 +0000   Mon, 29 Apr 2024 14:33:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 14:31:57 +0000   Mon, 29 Apr 2024 14:33:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 14:31:57 +0000   Mon, 29 Apr 2024 14:33:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 14:31:57 +0000   Mon, 29 Apr 2024 14:33:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-581657
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 76085855fc394f09917b8a1518e64314
	  System UUID:                f5a030bd-ee1e-4b87-bcf3-81197e35a362
	  Boot ID:                    b8f2360a-0b19-4e04-aa8c-604719eae8f1
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jpbj7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 coredns-7db6d8ff4d-9nqsr             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 coredns-7db6d8ff4d-qvn8n             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-ha-581657                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-z64kr                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-581657             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-581657    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-d8t8s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-581657             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-581657                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 65s                    kube-proxy       
	  Normal  Starting                 4m40s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node ha-581657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node ha-581657 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node ha-581657 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           10m                    node-controller  Node ha-581657 event: Registered Node ha-581657 in Controller
	  Normal  NodeReady                9m46s                  kubelet          Node ha-581657 status is now: NodeReady
	  Normal  RegisteredNode           9m39s                  node-controller  Node ha-581657 event: Registered Node ha-581657 in Controller
	  Normal  RegisteredNode           8m43s                  node-controller  Node ha-581657 event: Registered Node ha-581657 in Controller
	  Normal  RegisteredNode           6m8s                   node-controller  Node ha-581657 event: Registered Node ha-581657 in Controller
	  Normal  NodeHasNoDiskPressure    5m34s (x8 over 5m34s)  kubelet          Node ha-581657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m34s (x8 over 5m34s)  kubelet          Node ha-581657 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m34s (x8 over 5m34s)  kubelet          Node ha-581657 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m34s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m46s                  node-controller  Node ha-581657 event: Registered Node ha-581657 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-581657 event: Registered Node ha-581657 in Controller
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-581657 event: Registered Node ha-581657 in Controller
	  Normal  Starting                 118s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)    kubelet          Node ha-581657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)    kubelet          Node ha-581657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 118s)    kubelet          Node ha-581657 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           72s                    node-controller  Node ha-581657 event: Registered Node ha-581657 in Controller
	  Normal  RegisteredNode           16s                    node-controller  Node ha-581657 event: Registered Node ha-581657 in Controller
	  Normal  NodeNotReady             6s                     node-controller  Node ha-581657 status is now: NodeNotReady
	
	
	Name:               ha-581657-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-581657-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=94a01df0d4b48636d4af3d06a53be687e06c0844
	                    minikube.k8s.io/name=ha-581657
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T14_23_21_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 14:23:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-581657-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 14:33:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 14:31:49 +0000   Mon, 29 Apr 2024 14:23:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 14:31:49 +0000   Mon, 29 Apr 2024 14:23:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 14:31:49 +0000   Mon, 29 Apr 2024 14:23:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 14:31:49 +0000   Mon, 29 Apr 2024 14:23:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-581657-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 9ec06eb2217a454a83485e3dd80bc2f7
	  System UUID:                269b30b3-25bb-4dc0-b533-a96f4abe958e
	  Boot ID:                    b8f2360a-0b19-4e04-aa8c-604719eae8f1
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sshpb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 etcd-ha-581657-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m58s
	  kube-system                 kindnet-xp94m                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m59s
	  kube-system                 kube-apiserver-ha-581657-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 kube-controller-manager-ha-581657-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 kube-proxy-zhbtq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 kube-scheduler-ha-581657-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 kube-vip-ha-581657-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m20s                  kube-proxy       
	  Normal  Starting                 9m55s                  kube-proxy       
	  Normal  Starting                 73s                    kube-proxy       
	  Normal  Starting                 4m50s                  kube-proxy       
	  Normal  NodeHasSufficientPID     9m59s (x8 over 9m59s)  kubelet          Node ha-581657-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    9m59s (x8 over 9m59s)  kubelet          Node ha-581657-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  9m59s (x8 over 9m59s)  kubelet          Node ha-581657-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           9m54s                  node-controller  Node ha-581657-m02 event: Registered Node ha-581657-m02 in Controller
	  Normal  RegisteredNode           9m39s                  node-controller  Node ha-581657-m02 event: Registered Node ha-581657-m02 in Controller
	  Normal  RegisteredNode           8m43s                  node-controller  Node ha-581657-m02 event: Registered Node ha-581657-m02 in Controller
	  Normal  NodeHasSufficientPID     6m45s (x8 over 6m45s)  kubelet          Node ha-581657-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    6m45s (x8 over 6m45s)  kubelet          Node ha-581657-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m45s (x8 over 6m45s)  kubelet          Node ha-581657-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           6m8s                   node-controller  Node ha-581657-m02 event: Registered Node ha-581657-m02 in Controller
	  Normal  NodeHasSufficientPID     5m32s (x8 over 5m32s)  kubelet          Node ha-581657-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m32s (x8 over 5m32s)  kubelet          Node ha-581657-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m32s (x8 over 5m32s)  kubelet          Node ha-581657-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           4m46s                  node-controller  Node ha-581657-m02 event: Registered Node ha-581657-m02 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-581657-m02 event: Registered Node ha-581657-m02 in Controller
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-581657-m02 event: Registered Node ha-581657-m02 in Controller
	  Normal  Starting                 116s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)    kubelet          Node ha-581657-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)    kubelet          Node ha-581657-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x8 over 116s)    kubelet          Node ha-581657-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           72s                    node-controller  Node ha-581657-m02 event: Registered Node ha-581657-m02 in Controller
	  Normal  RegisteredNode           16s                    node-controller  Node ha-581657-m02 event: Registered Node ha-581657-m02 in Controller
	
	
	Name:               ha-581657-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-581657-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=94a01df0d4b48636d4af3d06a53be687e06c0844
	                    minikube.k8s.io/name=ha-581657
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T14_25_18_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 14:25:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-581657-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 14:33:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 14:32:58 +0000   Mon, 29 Apr 2024 14:32:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 14:32:58 +0000   Mon, 29 Apr 2024 14:32:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 14:32:58 +0000   Mon, 29 Apr 2024 14:32:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 14:32:58 +0000   Mon, 29 Apr 2024 14:32:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-581657-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 0ee654090c4f4ed88f117846aace039e
	  System UUID:                391e97e0-35c9-416a-85c7-3edbf244440d
	  Boot ID:                    b8f2360a-0b19-4e04-aa8c-604719eae8f1
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pfc5z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 kindnet-7prmx              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m59s
	  kube-system                 kube-proxy-hshwx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m57s                  kube-proxy       
	  Normal  Starting                 10s                    kube-proxy       
	  Normal  Starting                 2m58s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m59s (x2 over 7m59s)  kubelet          Node ha-581657-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m59s                  node-controller  Node ha-581657-m04 event: Registered Node ha-581657-m04 in Controller
	  Normal  NodeHasNoDiskPressure    7m59s (x2 over 7m59s)  kubelet          Node ha-581657-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m59s (x2 over 7m59s)  kubelet          Node ha-581657-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           7m59s                  node-controller  Node ha-581657-m04 event: Registered Node ha-581657-m04 in Controller
	  Normal  RegisteredNode           7m58s                  node-controller  Node ha-581657-m04 event: Registered Node ha-581657-m04 in Controller
	  Normal  NodeReady                7m27s                  kubelet          Node ha-581657-m04 status is now: NodeReady
	  Normal  RegisteredNode           6m8s                   node-controller  Node ha-581657-m04 event: Registered Node ha-581657-m04 in Controller
	  Normal  RegisteredNode           4m46s                  node-controller  Node ha-581657-m04 event: Registered Node ha-581657-m04 in Controller
	  Normal  NodeNotReady             4m6s                   node-controller  Node ha-581657-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-581657-m04 event: Registered Node ha-581657-m04 in Controller
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-581657-m04 event: Registered Node ha-581657-m04 in Controller
	  Normal  Starting                 3m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m3s (x8 over 3m15s)   kubelet          Node ha-581657-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x8 over 3m15s)   kubelet          Node ha-581657-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x8 over 3m15s)   kubelet          Node ha-581657-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           72s                    node-controller  Node ha-581657-m04 event: Registered Node ha-581657-m04 in Controller
	  Normal  NodeNotReady             32s                    node-controller  Node ha-581657-m04 status is now: NodeNotReady
	  Normal  Starting                 31s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 31s)      kubelet          Node ha-581657-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 31s)      kubelet          Node ha-581657-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x8 over 31s)      kubelet          Node ha-581657-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16s                    node-controller  Node ha-581657-m04 event: Registered Node ha-581657-m04 in Controller
	
	
	==> dmesg <==
	[  +0.001121] FS-Cache: O-key=[8] 'ef445c0100000000'
	[  +0.000833] FS-Cache: N-cookie c=00000078 [p=0000006f fl=2 nc=0 na=1]
	[  +0.001039] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000c85fb157
	[  +0.001211] FS-Cache: N-key=[8] 'ef445c0100000000'
	[  +0.002692] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=00000072 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001189] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=0000000050015021
	[  +0.001188] FS-Cache: O-key=[8] 'ef445c0100000000'
	[  +0.000776] FS-Cache: N-cookie c=00000079 [p=0000006f fl=2 nc=0 na=1]
	[  +0.000979] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=000000008e2f8fe5
	[  +0.001268] FS-Cache: N-key=[8] 'ef445c0100000000'
	[  +3.179364] FS-Cache: Duplicate cookie detected
	[  +0.000801] FS-Cache: O-cookie c=00000070 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001026] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=00000000c9ff6823
	[  +0.001183] FS-Cache: O-key=[8] 'ee445c0100000000'
	[  +0.000813] FS-Cache: N-cookie c=0000007b [p=0000006f fl=2 nc=0 na=1]
	[  +0.000953] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000c85fb157
	[  +0.001094] FS-Cache: N-key=[8] 'ee445c0100000000'
	[  +0.286519] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=00000075 [p=0000006f fl=226 nc=0 na=1]
	[  +0.000964] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=00000000adc83d13
	[  +0.001065] FS-Cache: O-key=[8] 'f4445c0100000000'
	[  +0.000785] FS-Cache: N-cookie c=0000007c [p=0000006f fl=2 nc=0 na=1]
	[  +0.000978] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000652a63b0
	[  +0.001125] FS-Cache: N-key=[8] 'f4445c0100000000'
	
	
	==> etcd [5c5aa6ef46cbbfb0d27fabf9b206e45d8d5c3bff520883fd012c215632170183] <==
	{"level":"warn","ts":"2024-04-29T14:31:45.374678Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T14:31:43.99832Z","time spent":"1.376347441s","remote":"127.0.0.1:35072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":39,"response count":12,"response size":7116,"request content":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" limit:10000 "}
	{"level":"warn","ts":"2024-04-29T14:31:45.312772Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.314456665s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T14:31:45.374925Z","caller":"traceutil/trace.go:171","msg":"trace[1247031958] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:2495; }","duration":"1.376607058s","start":"2024-04-29T14:31:43.99831Z","end":"2024-04-29T14:31:45.374918Z","steps":["trace[1247031958] 'agreement among raft nodes before linearized reading'  (duration: 1.314447828s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:31:45.374952Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T14:31:43.998299Z","time spent":"1.376642587s","remote":"127.0.0.1:35046","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":29,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 "}
	{"level":"warn","ts":"2024-04-29T14:31:45.296658Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.298379445s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" limit:10000 ","response":"range_response_count:55 size:39193"}
	{"level":"info","ts":"2024-04-29T14:31:45.37509Z","caller":"traceutil/trace.go:171","msg":"trace[1865508166] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:55; response_revision:2495; }","duration":"1.376812226s","start":"2024-04-29T14:31:43.998268Z","end":"2024-04-29T14:31:45.37508Z","steps":["trace[1865508166] 'agreement among raft nodes before linearized reading'  (duration: 1.298000715s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:31:45.375114Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T14:31:43.998257Z","time spent":"1.376847032s","remote":"127.0.0.1:35090","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":55,"response size":39217,"request content":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" limit:10000 "}
	{"level":"info","ts":"2024-04-29T14:31:45.322199Z","caller":"traceutil/trace.go:171","msg":"trace[810183868] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; response_count:0; response_revision:2495; }","duration":"1.342550722s","start":"2024-04-29T14:31:43.979629Z","end":"2024-04-29T14:31:45.322179Z","steps":["trace[810183868] 'agreement among raft nodes before linearized reading'  (duration: 1.326799072s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:31:45.375311Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T14:31:43.979621Z","time spent":"1.395680994s","remote":"127.0.0.1:34972","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":29,"request content":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:10000 "}
	{"level":"info","ts":"2024-04-29T14:31:45.378952Z","caller":"traceutil/trace.go:171","msg":"trace[590059448] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; response_count:0; response_revision:2495; }","duration":"1.259071138s","start":"2024-04-29T14:31:44.119869Z","end":"2024-04-29T14:31:45.37894Z","steps":["trace[590059448] 'agreement among raft nodes before linearized reading'  (duration: 1.191775266s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:31:45.379012Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T14:31:44.119841Z","time spent":"1.259155313s","remote":"127.0.0.1:35278","response type":"/etcdserverpb.KV/Range","request count":0,"request size":95,"response count":0,"response size":29,"request content":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" limit:500 "}
	{"level":"info","ts":"2024-04-29T14:31:45.379113Z","caller":"traceutil/trace.go:171","msg":"trace[1615378684] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; response_count:0; response_revision:2495; }","duration":"1.330331244s","start":"2024-04-29T14:31:44.048776Z","end":"2024-04-29T14:31:45.379107Z","steps":["trace[1615378684] 'agreement among raft nodes before linearized reading'  (duration: 1.262889084s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:31:45.379145Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T14:31:44.048749Z","time spent":"1.330389114s","remote":"127.0.0.1:34894","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":29,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:500 "}
	{"level":"info","ts":"2024-04-29T14:31:45.379261Z","caller":"traceutil/trace.go:171","msg":"trace[31730381] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:13; response_revision:2495; }","duration":"1.338186466s","start":"2024-04-29T14:31:44.041066Z","end":"2024-04-29T14:31:45.379252Z","steps":["trace[31730381] 'agreement among raft nodes before linearized reading'  (duration: 1.270619703s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:31:45.379292Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T14:31:44.041035Z","time spent":"1.338247011s","remote":"127.0.0.1:35172","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":13,"response size":14421,"request content":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" limit:500 "}
	{"level":"info","ts":"2024-04-29T14:31:45.3794Z","caller":"traceutil/trace.go:171","msg":"trace[1831169656] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; response_count:42; response_revision:2495; }","duration":"1.348746337s","start":"2024-04-29T14:31:44.030646Z","end":"2024-04-29T14:31:45.379392Z","steps":["trace[1831169656] 'agreement among raft nodes before linearized reading'  (duration: 1.281111529s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:31:45.379431Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T14:31:44.030615Z","time spent":"1.348806832s","remote":"127.0.0.1:34936","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":42,"response size":9241,"request content":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" limit:500 "}
	{"level":"info","ts":"2024-04-29T14:31:45.379582Z","caller":"traceutil/trace.go:171","msg":"trace[220289048] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; response_count:0; response_revision:2495; }","duration":"1.378125729s","start":"2024-04-29T14:31:44.001449Z","end":"2024-04-29T14:31:45.379574Z","steps":["trace[220289048] 'agreement among raft nodes before linearized reading'  (duration: 1.310443414s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:31:45.379614Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T14:31:44.001446Z","time spent":"1.378159755s","remote":"127.0.0.1:35134","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":29,"request content":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" limit:10000 "}
	{"level":"info","ts":"2024-04-29T14:31:45.379761Z","caller":"traceutil/trace.go:171","msg":"trace[1362796364] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:8; response_revision:2495; }","duration":"1.378314978s","start":"2024-04-29T14:31:44.001439Z","end":"2024-04-29T14:31:45.379754Z","steps":["trace[1362796364] 'agreement among raft nodes before linearized reading'  (duration: 1.31047648s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:31:45.379794Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T14:31:44.001429Z","time spent":"1.378355134s","remote":"127.0.0.1:35164","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":8,"response size":5443,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:10000 "}
	{"level":"info","ts":"2024-04-29T14:31:45.37992Z","caller":"traceutil/trace.go:171","msg":"trace[166936622] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:2; response_revision:2495; }","duration":"1.37849069s","start":"2024-04-29T14:31:44.001423Z","end":"2024-04-29T14:31:45.379914Z","steps":["trace[166936622] 'agreement among raft nodes before linearized reading'  (duration: 1.310547216s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T14:31:45.37995Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T14:31:44.001414Z","time spent":"1.378528294s","remote":"127.0.0.1:35106","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":2,"response size":936,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 "}
	{"level":"warn","ts":"2024-04-29T14:31:46.425997Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"969b80f77df94369","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T14:31:46.426077Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"969b80f77df94369","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	
	
	==> kernel <==
	 14:33:17 up 10:15,  0 users,  load average: 1.46, 1.82, 1.97
	Linux ha-581657 5.15.0-1058-aws #64~20.04.1-Ubuntu SMP Tue Apr 9 11:11:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [96eba157f55f89d01694468e672efc36da5d0ce640f7139fec302e55ca11491b] <==
	I0429 14:32:42.050511       1 main.go:227] handling current node
	I0429 14:32:42.053775       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0429 14:32:42.053813       1 main.go:250] Node ha-581657-m02 has CIDR [10.244.1.0/24] 
	I0429 14:32:42.053963       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0} 
	I0429 14:32:42.054042       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0429 14:32:42.054062       1 main.go:250] Node ha-581657-m04 has CIDR [10.244.3.0/24] 
	I0429 14:32:42.054126       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0} 
	I0429 14:32:52.067626       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:32:52.067658       1 main.go:227] handling current node
	I0429 14:32:52.067670       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0429 14:32:52.067676       1 main.go:250] Node ha-581657-m02 has CIDR [10.244.1.0/24] 
	I0429 14:32:52.067768       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0429 14:32:52.067781       1 main.go:250] Node ha-581657-m04 has CIDR [10.244.3.0/24] 
	I0429 14:33:02.081601       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:33:02.081641       1 main.go:227] handling current node
	I0429 14:33:02.081656       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0429 14:33:02.081662       1 main.go:250] Node ha-581657-m02 has CIDR [10.244.1.0/24] 
	I0429 14:33:02.081771       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0429 14:33:02.081791       1 main.go:250] Node ha-581657-m04 has CIDR [10.244.3.0/24] 
	I0429 14:33:12.096458       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0429 14:33:12.096486       1 main.go:227] handling current node
	I0429 14:33:12.096497       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0429 14:33:12.096503       1 main.go:250] Node ha-581657-m02 has CIDR [10.244.1.0/24] 
	I0429 14:33:12.096612       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0429 14:33:12.096625       1 main.go:250] Node ha-581657-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4d4a0d3a743b1b16c7a7a98e939459588679a075ce6c8296aa8e907630ec19db] <==
	I0429 14:32:36.195831       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0429 14:32:36.195874       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0429 14:32:35.876029       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0429 14:32:36.196155       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0429 14:32:36.196283       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0429 14:32:36.295983       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 14:32:36.296071       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 14:32:36.300781       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 14:32:36.301820       1 aggregator.go:165] initial CRD sync complete...
	I0429 14:32:36.301848       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 14:32:36.301856       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 14:32:36.301862       1 cache.go:39] Caches are synced for autoregister controller
	I0429 14:32:36.364197       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 14:32:36.369738       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 14:32:36.369767       1 policy_source.go:224] refreshing policies
	I0429 14:32:36.375494       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 14:32:36.375884       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 14:32:36.376633       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 14:32:36.377178       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 14:32:36.385992       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 14:32:36.386166       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 14:32:36.884918       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0429 14:32:37.303374       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0429 14:32:37.305458       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 14:32:37.328422       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [ad0181d7acf7c3ebfc0b1cc825ef1fc726880557d9aa5b16f29e8100ee76ad14] <==
	Trace[93169979]: [1.587397126s] [1.587397126s] END
	I0429 14:31:45.403127       1 trace.go:236] Trace[188726420]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:9a66c637-ff81-4cb2-8b80-6806f8aa428b,client:::1,api-group:,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:serviceaccounts,scope:cluster,url:/api/v1/serviceaccounts,user-agent:kube-apiserver/v1.30.0 (linux/arm64) kubernetes/7c48c2b,verb:LIST (29-Apr-2024 14:31:44.030) (total time: 1372ms):
	Trace[188726420]: ["List(recursive=true) etcd3" audit-id:9a66c637-ff81-4cb2-8b80-6806f8aa428b,key:/serviceaccounts,resourceVersion:0,resourceVersionMatch:,limit:500,continue: 1372ms (14:31:44.030)]
	Trace[188726420]: [1.372814318s] [1.372814318s] END
	I0429 14:31:45.404885       1 trace.go:236] Trace[618722573]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:b11278da-664e-4544-9f6a-d7c18ba7279f,client:::1,api-group:apiregistration.k8s.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:apiservices,scope:cluster,url:/apis/apiregistration.k8s.io/v1/apiservices,user-agent:kube-apiserver/v1.30.0 (linux/arm64) kubernetes/7c48c2b,verb:LIST (29-Apr-2024 14:31:44.181) (total time: 1223ms):
	Trace[618722573]: ["List(recursive=true) etcd3" audit-id:b11278da-664e-4544-9f6a-d7c18ba7279f,key:/apiregistration.k8s.io/apiservices,resourceVersion:0,resourceVersionMatch:,limit:500,continue: 1223ms (14:31:44.181)]
	Trace[618722573]: [1.223750038s] [1.223750038s] END
	I0429 14:31:45.405061       1 trace.go:236] Trace[718299711]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:dd1db031-1bfb-4842-aaa5-3143fe5ae9a3,client:::1,api-group:flowcontrol.apiserver.k8s.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:flowschemas,scope:cluster,url:/apis/flowcontrol.apiserver.k8s.io/v1/flowschemas,user-agent:kube-apiserver/v1.30.0 (linux/arm64) kubernetes/7c48c2b,verb:LIST (29-Apr-2024 14:31:44.040) (total time: 1364ms):
	Trace[718299711]: ["List(recursive=true) etcd3" audit-id:dd1db031-1bfb-4842-aaa5-3143fe5ae9a3,key:/flowschemas,resourceVersion:0,resourceVersionMatch:,limit:500,continue: 1364ms (14:31:44.040)]
	Trace[718299711]: [1.364333245s] [1.364333245s] END
	I0429 14:31:45.407783       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 14:31:45.408004       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 14:31:45.409097       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 14:31:45.412154       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 14:31:45.416882       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 14:31:45.420810       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 14:31:45.420838       1 policy_source.go:224] refreshing policies
	I0429 14:31:45.423882       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 14:31:45.425807       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 14:31:45.425970       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0429 14:31:45.439976       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0429 14:31:45.468495       1 cache.go:39] Caches are synced for autoregister controller
	I0429 14:31:45.481183       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 14:31:45.489856       1 shared_informer.go:320] Caches are synced for node_authorizer
	F0429 14:32:31.317588       1 hooks.go:203] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-controller-manager [a0ef46ef9701e53a771b780ffa018faf1458cce5f40f425fb10f71483d83f0ab] <==
	I0429 14:32:13.761147       1 serving.go:380] Generated self-signed cert in-memory
	I0429 14:32:14.570556       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0429 14:32:14.570584       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 14:32:14.572024       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0429 14:32:14.572283       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0429 14:32:14.572301       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 14:32:14.572313       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0429 14:32:24.588960       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [a2326e4fb5a476e563367aca5dbf1fb98646d3ac6a0e1c19015d2e494ac20228] <==
	I0429 14:33:01.035099       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0429 14:33:01.038325       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0429 14:33:01.042394       1 shared_informer.go:320] Caches are synced for GC
	I0429 14:33:01.053485       1 shared_informer.go:320] Caches are synced for taint
	I0429 14:33:01.053850       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0429 14:33:01.069886       1 shared_informer.go:320] Caches are synced for persistent volume
	I0429 14:33:01.077061       1 shared_informer.go:320] Caches are synced for PV protection
	I0429 14:33:01.118088       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 14:33:01.122886       1 shared_informer.go:320] Caches are synced for namespace
	I0429 14:33:01.162838       1 shared_informer.go:320] Caches are synced for service account
	I0429 14:33:01.167252       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 14:33:01.170324       1 shared_informer.go:320] Caches are synced for attach detach
	I0429 14:33:01.196118       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-581657"
	I0429 14:33:01.196195       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-581657-m04"
	I0429 14:33:01.196252       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-581657-m02"
	I0429 14:33:01.197832       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0429 14:33:01.640029       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 14:33:01.640063       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 14:33:01.654954       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 14:33:07.136387       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.064µs"
	I0429 14:33:08.391734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.496222ms"
	I0429 14:33:08.392173       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.568µs"
	I0429 14:33:11.080517       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-581657-m04"
	I0429 14:33:11.202446       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.067175ms"
	I0429 14:33:11.202539       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.482µs"
	
	
	==> kube-proxy [40cc82f3fdc6955e09738bd9dc4c281de0f2972a12f3dd7037bd1279af9a9d66] <==
	I0429 14:32:12.240291       1 server_linux.go:69] "Using iptables proxy"
	I0429 14:32:12.293625       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0429 14:32:12.570638       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0429 14:32:12.570764       1 server_linux.go:165] "Using iptables Proxier"
	I0429 14:32:12.574034       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0429 14:32:12.574137       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0429 14:32:12.574236       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 14:32:12.574524       1 server.go:872] "Version info" version="v1.30.0"
	I0429 14:32:12.574839       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 14:32:12.577037       1 config.go:192] "Starting service config controller"
	I0429 14:32:12.577101       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 14:32:12.577153       1 config.go:101] "Starting endpoint slice config controller"
	I0429 14:32:12.577182       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 14:32:12.577814       1 config.go:319] "Starting node config controller"
	I0429 14:32:12.579817       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 14:32:12.677870       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 14:32:12.677987       1 shared_informer.go:320] Caches are synced for service config
	I0429 14:32:12.680569       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f2be2ccb714f0ab7852a81362ee68b86f0411c14937788d59cb96e1516ff4158] <==
	E0429 14:31:41.324727       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 14:31:41.352694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 14:31:41.352793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 14:31:42.052147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 14:31:42.052187       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 14:31:42.070220       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 14:31:42.070367       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 14:31:42.148814       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 14:31:42.148860       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0429 14:31:50.672217       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 14:32:36.233016       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:52554->192.168.49.2:8443: read: connection reset by peer
	E0429 14:32:36.233111       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:52550->192.168.49.2:8443: read: connection reset by peer
	E0429 14:32:36.233183       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:52488->192.168.49.2:8443: read: connection reset by peer
	E0429 14:32:36.233243       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:52530->192.168.49.2:8443: read: connection reset by peer
	E0429 14:32:36.233306       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:52520->192.168.49.2:8443: read: connection reset by peer
	E0429 14:32:36.233388       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:52476->192.168.49.2:8443: read: connection reset by peer
	E0429 14:32:36.233441       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:52462->192.168.49.2:8443: read: connection reset by peer
	E0429 14:32:36.233493       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:52536->192.168.49.2:8443: read: connection reset by peer
	E0429 14:32:36.233548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:52496->192.168.49.2:8443: read: connection reset by peer
	E0429 14:32:36.233598       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:52492->192.168.49.2:8443: read: connection reset by peer
	E0429 14:32:36.233744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:52558->192.168.49.2:8443: read: connection reset by peer
	E0429 14:32:36.233805       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:52504->192.168.49.2:8443: read: connection reset by peer
	E0429 14:32:36.233860       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:52480->192.168.49.2:8443: read: connection reset by peer
	E0429 14:32:36.233915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:52556->192.168.49.2:8443: read: connection reset by peer
	E0429 14:32:36.260974       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:52572->192.168.49.2:8443: read: connection reset by peer
	
	
	==> kubelet <==
	Apr 29 14:32:25 ha-581657 kubelet[757]: I0429 14:32:25.289137     757 scope.go:117] "RemoveContainer" containerID="a0ef46ef9701e53a771b780ffa018faf1458cce5f40f425fb10f71483d83f0ab"
	Apr 29 14:32:25 ha-581657 kubelet[757]: E0429 14:32:25.289556     757 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-581657_kube-system(b1dc58683d5eeca26640876d3f656aec)\"" pod="kube-system/kube-controller-manager-ha-581657" podUID="b1dc58683d5eeca26640876d3f656aec"
	Apr 29 14:32:27 ha-581657 kubelet[757]: I0429 14:32:27.492306     757 scope.go:117] "RemoveContainer" containerID="a0ef46ef9701e53a771b780ffa018faf1458cce5f40f425fb10f71483d83f0ab"
	Apr 29 14:32:27 ha-581657 kubelet[757]: E0429 14:32:27.492857     757 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-581657_kube-system(b1dc58683d5eeca26640876d3f656aec)\"" pod="kube-system/kube-controller-manager-ha-581657" podUID="b1dc58683d5eeca26640876d3f656aec"
	Apr 29 14:32:31 ha-581657 kubelet[757]: E0429 14:32:31.475426     757 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/coredns-7db6d8ff4d-9nqsr.17cac6bf1b4656fb\": unexpected EOF" event="&Event{ObjectMeta:{coredns-7db6d8ff4d-9nqsr.17cac6bf1b4656fb  kube-system   2785 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7db6d8ff4d-9nqsr,UID:03cf70a1-960e-4ac9-bb97-ed66df6d64aa,APIVersion:v1,ResourceVersion:2017,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 503,Source:EventSource{Component:kubelet,Host:ha-581657,},FirstTimestamp:2024-04-29 14:32:12 +0000 UTC,LastTimestamp:2024-04-29 14:32:31.403492749 +0000 UTC m=+72.475069821,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-581657,}"
	Apr 29 14:32:31 ha-581657 kubelet[757]: I0429 14:32:31.843924     757 scope.go:117] "RemoveContainer" containerID="a0ef46ef9701e53a771b780ffa018faf1458cce5f40f425fb10f71483d83f0ab"
	Apr 29 14:32:31 ha-581657 kubelet[757]: E0429 14:32:31.844427     757 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-581657_kube-system(b1dc58683d5eeca26640876d3f656aec)\"" pod="kube-system/kube-controller-manager-ha-581657" podUID="b1dc58683d5eeca26640876d3f656aec"
	Apr 29 14:32:32 ha-581657 kubelet[757]: E0429 14:32:32.281522     757 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/coredns-7db6d8ff4d-9nqsr.17cac6bf1b4656fb\": dial tcp 192.168.49.254:8443: connect: connection refused" event="&Event{ObjectMeta:{coredns-7db6d8ff4d-9nqsr.17cac6bf1b4656fb  kube-system   2785 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7db6d8ff4d-9nqsr,UID:03cf70a1-960e-4ac9-bb97-ed66df6d64aa,APIVersion:v1,ResourceVersion:2017,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 503,Source:EventSource{Component:kubelet,Host:ha-581657,},FirstTimestamp:2024-04-29 14:32:12 +0000 UTC,LastTimestamp:2024-04-29 14:32:31.403492749 +0000 UTC m=+72.475069821,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-581657,}"
	Apr 29 14:32:32 ha-581657 kubelet[757]: I0429 14:32:32.303426     757 scope.go:117] "RemoveContainer" containerID="ad0181d7acf7c3ebfc0b1cc825ef1fc726880557d9aa5b16f29e8100ee76ad14"
	Apr 29 14:32:32 ha-581657 kubelet[757]: I0429 14:32:32.304462     757 status_manager.go:853] "Failed to get status for pod" podUID="8c8e8205492841543bc49e8452e1eee0" pod="kube-system/kube-apiserver-ha-581657" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-581657\": dial tcp 192.168.49.254:8443: connect: connection refused"
	Apr 29 14:32:36 ha-581657 kubelet[757]: E0429 14:32:36.203941     757 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:47850->192.168.49.254:8443: read: connection reset by peer
	Apr 29 14:32:36 ha-581657 kubelet[757]: E0429 14:32:36.204218     757 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:47934->192.168.49.254:8443: read: connection reset by peer
	Apr 29 14:32:36 ha-581657 kubelet[757]: E0429 14:32:36.204276     757 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:47860->192.168.49.254:8443: read: connection reset by peer
	Apr 29 14:32:36 ha-581657 kubelet[757]: E0429 14:32:36.204325     757 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:47896->192.168.49.254:8443: read: connection reset by peer
	Apr 29 14:32:37 ha-581657 kubelet[757]: I0429 14:32:37.316451     757 scope.go:117] "RemoveContainer" containerID="9446d9588318c7ccb86dffe824fc6e64469e86c2e6c272235e1e5e364a47200e"
	Apr 29 14:32:42 ha-581657 kubelet[757]: I0429 14:32:42.329580     757 scope.go:117] "RemoveContainer" containerID="4980c6d4fcd76e525a79ab1c6b11d6f905fa271b3d9424ba9d05d703d10b78a9"
	Apr 29 14:32:47 ha-581657 kubelet[757]: I0429 14:32:47.141987     757 scope.go:117] "RemoveContainer" containerID="a0ef46ef9701e53a771b780ffa018faf1458cce5f40f425fb10f71483d83f0ab"
	Apr 29 14:32:47 ha-581657 kubelet[757]: E0429 14:32:47.761398     757 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-581657\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-581657?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 29 14:32:47 ha-581657 kubelet[757]: E0429 14:32:47.767657     757 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-581657?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 29 14:32:57 ha-581657 kubelet[757]: E0429 14:32:57.762355     757 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-581657\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-581657?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 29 14:32:57 ha-581657 kubelet[757]: E0429 14:32:57.768744     757 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-581657?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 29 14:33:07 ha-581657 kubelet[757]: E0429 14:33:07.763187     757 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-581657\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-581657?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 29 14:33:07 ha-581657 kubelet[757]: E0429 14:33:07.769600     757 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-581657?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 29 14:33:17 ha-581657 kubelet[757]: E0429 14:33:17.763837     757 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-581657\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-581657?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Apr 29 14:33:17 ha-581657 kubelet[757]: E0429 14:33:17.770260     757 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-581657?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-581657 -n ha-581657
helpers_test.go:261: (dbg) Run:  kubectl --context ha-581657 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (127.45s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (35.1s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-432914 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-432914 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.516757027s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-432914] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "pause-432914" primary control-plane node in "pause-432914" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Updating the running docker "pause-432914" container ...
	* Preparing Kubernetes v1.30.0 on CRI-O 1.24.6 ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-432914" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 14:56:59.856941 2066013 out.go:291] Setting OutFile to fd 1 ...
	I0429 14:56:59.857109 2066013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:56:59.857118 2066013 out.go:304] Setting ErrFile to fd 2...
	I0429 14:56:59.857123 2066013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:56:59.857373 2066013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
	I0429 14:56:59.857746 2066013 out.go:298] Setting JSON to false
	I0429 14:56:59.858751 2066013 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":38364,"bootTime":1714364256,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 14:56:59.858833 2066013 start.go:139] virtualization:  
	I0429 14:56:59.863461 2066013 out.go:177] * [pause-432914] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 14:56:59.866098 2066013 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 14:56:59.866138 2066013 notify.go:220] Checking for updates...
	I0429 14:56:59.868797 2066013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 14:56:59.870876 2066013 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:56:59.873103 2066013 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	I0429 14:56:59.875290 2066013 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0429 14:56:59.877785 2066013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 14:56:59.880268 2066013 config.go:182] Loaded profile config "pause-432914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:56:59.880950 2066013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 14:56:59.903091 2066013 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 14:56:59.903212 2066013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:56:59.967278 2066013 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2024-04-29 14:56:59.958094854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:56:59.967378 2066013 docker.go:295] overlay module found
	I0429 14:56:59.969951 2066013 out.go:177] * Using the docker driver based on existing profile
	I0429 14:56:59.971975 2066013 start.go:297] selected driver: docker
	I0429 14:56:59.972003 2066013 start.go:901] validating driver "docker" against &{Name:pause-432914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-432914 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-alias
es:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:56:59.972120 2066013 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 14:56:59.972213 2066013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:57:00.111129 2066013 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2024-04-29 14:57:00.070054459 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:57:00.111792 2066013 cni.go:84] Creating CNI manager for ""
	I0429 14:57:00.111815 2066013 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 14:57:00.111906 2066013 start.go:340] cluster config:
	{Name:pause-432914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-432914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-glus
ter:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:57:00.115754 2066013 out.go:177] * Starting "pause-432914" primary control-plane node in "pause-432914" cluster
	I0429 14:57:00.119077 2066013 cache.go:121] Beginning downloading kic base image for docker with crio
	I0429 14:57:00.123422 2066013 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 14:57:00.126605 2066013 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:57:00.126666 2066013 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 14:57:00.127845 2066013 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	I0429 14:57:00.128139 2066013 cache.go:56] Caching tarball of preloaded images
	I0429 14:57:00.128262 2066013 preload.go:173] Found /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0429 14:57:00.128275 2066013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 14:57:00.128422 2066013 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/config.json ...
	I0429 14:57:00.172818 2066013 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 14:57:00.172860 2066013 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 14:57:00.172895 2066013 cache.go:194] Successfully downloaded all kic artifacts
	I0429 14:57:00.172936 2066013 start.go:360] acquireMachinesLock for pause-432914: {Name:mk60e4243217024a35490e9d845b2c689d9870db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 14:57:00.173042 2066013 start.go:364] duration metric: took 74.363µs to acquireMachinesLock for "pause-432914"
	I0429 14:57:00.173071 2066013 start.go:96] Skipping create...Using existing machine configuration
	I0429 14:57:00.173089 2066013 fix.go:54] fixHost starting: 
	I0429 14:57:00.173418 2066013 cli_runner.go:164] Run: docker container inspect pause-432914 --format={{.State.Status}}
	I0429 14:57:00.204183 2066013 fix.go:112] recreateIfNeeded on pause-432914: state=Running err=<nil>
	W0429 14:57:00.204246 2066013 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 14:57:00.207908 2066013 out.go:177] * Updating the running docker "pause-432914" container ...
	I0429 14:57:00.210335 2066013 machine.go:94] provisionDockerMachine start ...
	I0429 14:57:00.210488 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:00.234801 2066013 main.go:141] libmachine: Using SSH client type: native
	I0429 14:57:00.235147 2066013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35297 <nil> <nil>}
	I0429 14:57:00.235168 2066013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 14:57:00.380399 2066013 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-432914
	
	I0429 14:57:00.380429 2066013 ubuntu.go:169] provisioning hostname "pause-432914"
	I0429 14:57:00.380518 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:00.409210 2066013 main.go:141] libmachine: Using SSH client type: native
	I0429 14:57:00.409462 2066013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35297 <nil> <nil>}
	I0429 14:57:00.409488 2066013 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-432914 && echo "pause-432914" | sudo tee /etc/hostname
	I0429 14:57:00.549554 2066013 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-432914
	
	I0429 14:57:00.549707 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:00.567527 2066013 main.go:141] libmachine: Using SSH client type: native
	I0429 14:57:00.567802 2066013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35297 <nil> <nil>}
	I0429 14:57:00.567827 2066013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-432914' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-432914/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-432914' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 14:57:00.692845 2066013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 14:57:00.692871 2066013 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18771-1897267/.minikube CaCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18771-1897267/.minikube}
	I0429 14:57:00.692894 2066013 ubuntu.go:177] setting up certificates
	I0429 14:57:00.692904 2066013 provision.go:84] configureAuth start
	I0429 14:57:00.692964 2066013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-432914
	I0429 14:57:00.709078 2066013 provision.go:143] copyHostCerts
	I0429 14:57:00.709151 2066013 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem, removing ...
	I0429 14:57:00.709165 2066013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem
	I0429 14:57:00.709242 2066013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem (1679 bytes)
	I0429 14:57:00.709343 2066013 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem, removing ...
	I0429 14:57:00.709356 2066013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem
	I0429 14:57:00.709386 2066013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem (1078 bytes)
	I0429 14:57:00.709455 2066013 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem, removing ...
	I0429 14:57:00.709463 2066013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem
	I0429 14:57:00.709487 2066013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem (1123 bytes)
	I0429 14:57:00.709539 2066013 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem org=jenkins.pause-432914 san=[127.0.0.1 192.168.85.2 localhost minikube pause-432914]
	I0429 14:57:01.057253 2066013 provision.go:177] copyRemoteCerts
	I0429 14:57:01.057323 2066013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 14:57:01.057363 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:01.073781 2066013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35297 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/pause-432914/id_rsa Username:docker}
	I0429 14:57:01.178184 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 14:57:01.204914 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0429 14:57:01.230581 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 14:57:01.256104 2066013 provision.go:87] duration metric: took 563.186254ms to configureAuth
	I0429 14:57:01.256130 2066013 ubuntu.go:193] setting minikube options for container-runtime
	I0429 14:57:01.256429 2066013 config.go:182] Loaded profile config "pause-432914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:57:01.256563 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:01.273774 2066013 main.go:141] libmachine: Using SSH client type: native
	I0429 14:57:01.274030 2066013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35297 <nil> <nil>}
	I0429 14:57:01.274051 2066013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 14:57:06.657237 2066013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 14:57:06.657259 2066013 machine.go:97] duration metric: took 6.446894252s to provisionDockerMachine
	I0429 14:57:06.657271 2066013 start.go:293] postStartSetup for "pause-432914" (driver="docker")
	I0429 14:57:06.657283 2066013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 14:57:06.657343 2066013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 14:57:06.657390 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:06.673612 2066013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35297 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/pause-432914/id_rsa Username:docker}
	I0429 14:57:06.769664 2066013 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 14:57:06.772825 2066013 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0429 14:57:06.772860 2066013 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0429 14:57:06.772876 2066013 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0429 14:57:06.772883 2066013 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0429 14:57:06.772895 2066013 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-1897267/.minikube/addons for local assets ...
	I0429 14:57:06.772950 2066013 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-1897267/.minikube/files for local assets ...
	I0429 14:57:06.773030 2066013 filesync.go:149] local asset: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem -> 19026842.pem in /etc/ssl/certs
	I0429 14:57:06.773134 2066013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 14:57:06.781823 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem --> /etc/ssl/certs/19026842.pem (1708 bytes)
	I0429 14:57:06.806194 2066013 start.go:296] duration metric: took 148.908328ms for postStartSetup
	I0429 14:57:06.806300 2066013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 14:57:06.806355 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:06.822552 2066013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35297 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/pause-432914/id_rsa Username:docker}
	I0429 14:57:06.910211 2066013 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 14:57:06.915247 2066013 fix.go:56] duration metric: took 6.742161306s for fixHost
	I0429 14:57:06.915272 2066013 start.go:83] releasing machines lock for "pause-432914", held for 6.74221688s
	I0429 14:57:06.915366 2066013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-432914
	I0429 14:57:06.931459 2066013 ssh_runner.go:195] Run: cat /version.json
	I0429 14:57:06.931518 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:06.931777 2066013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 14:57:06.931829 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:06.949712 2066013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35297 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/pause-432914/id_rsa Username:docker}
	I0429 14:57:06.952361 2066013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35297 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/pause-432914/id_rsa Username:docker}
	I0429 14:57:07.036591 2066013 ssh_runner.go:195] Run: systemctl --version
	I0429 14:57:07.152061 2066013 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 14:57:07.295531 2066013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 14:57:07.300168 2066013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 14:57:07.309257 2066013 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0429 14:57:07.309387 2066013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 14:57:07.318689 2066013 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 14:57:07.318713 2066013 start.go:494] detecting cgroup driver to use...
	I0429 14:57:07.318747 2066013 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0429 14:57:07.318805 2066013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 14:57:07.331792 2066013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 14:57:07.343855 2066013 docker.go:217] disabling cri-docker service (if available) ...
	I0429 14:57:07.343920 2066013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 14:57:07.357152 2066013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 14:57:07.370524 2066013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 14:57:07.499416 2066013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 14:57:07.621891 2066013 docker.go:233] disabling docker service ...
	I0429 14:57:07.621964 2066013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 14:57:07.635580 2066013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 14:57:07.647851 2066013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 14:57:07.762420 2066013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 14:57:07.888941 2066013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 14:57:07.900859 2066013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 14:57:07.917641 2066013 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 14:57:07.917706 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.927195 2066013 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 14:57:07.927265 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.936976 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.947659 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.957552 2066013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 14:57:07.967046 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.977338 2066013 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.986955 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.998835 2066013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 14:57:08.011130 2066013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 14:57:08.021093 2066013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:57:08.162398 2066013 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 14:57:08.346921 2066013 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 14:57:08.346990 2066013 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 14:57:08.353250 2066013 start.go:562] Will wait 60s for crictl version
	I0429 14:57:08.353312 2066013 ssh_runner.go:195] Run: which crictl
	I0429 14:57:08.357948 2066013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 14:57:08.435034 2066013 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0429 14:57:08.435117 2066013 ssh_runner.go:195] Run: crio --version
	I0429 14:57:08.476287 2066013 ssh_runner.go:195] Run: crio --version
	I0429 14:57:08.553392 2066013 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.24.6 ...
	I0429 14:57:08.556106 2066013 cli_runner.go:164] Run: docker network inspect pause-432914 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 14:57:08.578343 2066013 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0429 14:57:08.582508 2066013 kubeadm.go:877] updating cluster {Name:pause-432914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-432914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry
-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 14:57:08.582660 2066013 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:57:08.582714 2066013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 14:57:08.643260 2066013 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 14:57:08.643283 2066013 crio.go:433] Images already preloaded, skipping extraction
	I0429 14:57:08.643337 2066013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 14:57:08.703568 2066013 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 14:57:08.703595 2066013 cache_images.go:84] Images are preloaded, skipping loading
	I0429 14:57:08.703604 2066013 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.30.0 crio true true} ...
	I0429 14:57:08.703711 2066013 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-432914 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:pause-432914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 14:57:08.703803 2066013 ssh_runner.go:195] Run: crio config
	I0429 14:57:08.792543 2066013 cni.go:84] Creating CNI manager for ""
	I0429 14:57:08.792562 2066013 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 14:57:08.792578 2066013 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 14:57:08.792600 2066013 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-432914 NodeName:pause-432914 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 14:57:08.792773 2066013 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-432914"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 14:57:08.792850 2066013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 14:57:08.802178 2066013 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 14:57:08.802241 2066013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 14:57:08.810666 2066013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0429 14:57:08.830217 2066013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 14:57:08.849023 2066013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0429 14:57:08.875753 2066013 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0429 14:57:08.880752 2066013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:57:09.128182 2066013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 14:57:09.230635 2066013 certs.go:68] Setting up /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914 for IP: 192.168.85.2
	I0429 14:57:09.230654 2066013 certs.go:194] generating shared ca certs ...
	I0429 14:57:09.230682 2066013 certs.go:226] acquiring lock for ca certs: {Name:mk012c6865f9f1625b7bfd5d0280b6707793520e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:57:09.230838 2066013 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key
	I0429 14:57:09.230884 2066013 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key
	I0429 14:57:09.230891 2066013 certs.go:256] generating profile certs ...
	I0429 14:57:09.230977 2066013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/client.key
	I0429 14:57:09.231037 2066013 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/apiserver.key.01cd6b34
	I0429 14:57:09.231074 2066013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/proxy-client.key
	I0429 14:57:09.231175 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684.pem (1338 bytes)
	W0429 14:57:09.231200 2066013 certs.go:480] ignoring /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684_empty.pem, impossibly tiny 0 bytes
	I0429 14:57:09.231208 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 14:57:09.231236 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem (1078 bytes)
	I0429 14:57:09.231258 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem (1123 bytes)
	I0429 14:57:09.231284 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem (1679 bytes)
	I0429 14:57:09.231326 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem (1708 bytes)
	I0429 14:57:09.231935 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 14:57:09.340414 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 14:57:09.398138 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 14:57:09.497027 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 14:57:09.592245 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0429 14:57:09.634082 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 14:57:09.682931 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 14:57:09.730044 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 14:57:09.777792 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 14:57:09.822642 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684.pem --> /usr/share/ca-certificates/1902684.pem (1338 bytes)
	I0429 14:57:09.875788 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem --> /usr/share/ca-certificates/19026842.pem (1708 bytes)
	I0429 14:57:09.925488 2066013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 14:57:09.953926 2066013 ssh_runner.go:195] Run: openssl version
	I0429 14:57:09.969086 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 14:57:09.989799 2066013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:57:10.004602 2066013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 14:07 /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:57:10.004874 2066013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:57:10.021257 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 14:57:10.048197 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1902684.pem && ln -fs /usr/share/ca-certificates/1902684.pem /etc/ssl/certs/1902684.pem"
	I0429 14:57:10.064364 2066013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1902684.pem
	I0429 14:57:10.068571 2066013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 14:18 /usr/share/ca-certificates/1902684.pem
	I0429 14:57:10.068736 2066013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1902684.pem
	I0429 14:57:10.076289 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1902684.pem /etc/ssl/certs/51391683.0"
	I0429 14:57:10.109542 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19026842.pem && ln -fs /usr/share/ca-certificates/19026842.pem /etc/ssl/certs/19026842.pem"
	I0429 14:57:10.135614 2066013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19026842.pem
	I0429 14:57:10.147807 2066013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 14:18 /usr/share/ca-certificates/19026842.pem
	I0429 14:57:10.147922 2066013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19026842.pem
	I0429 14:57:10.163407 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19026842.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 14:57:10.197650 2066013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 14:57:10.207691 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 14:57:10.221662 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 14:57:10.238407 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 14:57:10.254520 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 14:57:10.267199 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 14:57:10.282949 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 14:57:10.295017 2066013 kubeadm.go:391] StartCluster: {Name:pause-432914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-432914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-cr
eds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:57:10.295193 2066013 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 14:57:10.295301 2066013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 14:57:10.351014 2066013 cri.go:89] found id: "a345b76233ad4e66cdff7662b867c232d392ed9c5f83d138ad1ebe3237aabca5"
	I0429 14:57:10.351088 2066013 cri.go:89] found id: "bc0f928fe0a658fbe2067b9f43871766f66ec9782c3e5acc32b91270bd624674"
	I0429 14:57:10.351107 2066013 cri.go:89] found id: "1cf12f4f214ef08f0e8c3f0dcb29a53ef72f4992a5dbe6a3df52d0d3751eda67"
	I0429 14:57:10.351122 2066013 cri.go:89] found id: "75470843e8018da0a5e303803151792aa9c26806ed406f16a72c26e9ce798d98"
	I0429 14:57:10.351139 2066013 cri.go:89] found id: "229390072c65722944c9a0fa5cf8515c9bbdeb9d3b6ae5aa6e6fa92958f427cb"
	I0429 14:57:10.351173 2066013 cri.go:89] found id: "cc2a18c92b0145ea5f55e58e03c469864baaf8c8fb578f2f06ca7713c485e56b"
	I0429 14:57:10.351190 2066013 cri.go:89] found id: "e729ee3b8b03da4783e7a9039abe12fa2bf82753e1f70aaf14e2c9a1374a0d71"
	I0429 14:57:10.351208 2066013 cri.go:89] found id: "8b6b1e430c982f19870d1aa416f56a799ba394846813c0552d13c3696f36bf50"
	I0429 14:57:10.351226 2066013 cri.go:89] found id: "0308fcfdc97181969e31209745d6443d2f452bfdd05aceb64ed104406c36e134"
	I0429 14:57:10.351255 2066013 cri.go:89] found id: "35331f4b60b1fc1c99b4f02d3288515822a4de533c83b26ed2aef975beac13e6"
	I0429 14:57:10.351279 2066013 cri.go:89] found id: "960e58a37bfe8df05c93e4c739b85c0523d77bbcadcdac18c09640f06af6c076"
	I0429 14:57:10.351297 2066013 cri.go:89] found id: "d90fc1365e9e01da73afcded913c468886648421a06c72f50e7822767e16769e"
	I0429 14:57:10.351314 2066013 cri.go:89] found id: "69c7e75b4d4f3ef42f8bfa1e90c23b93b7fe66a5cbd4bcead85fb41ed26a968f"
	I0429 14:57:10.351332 2066013 cri.go:89] found id: "133431ffb9b984f5f6320a552799fdcc9af3dc92e0e3b77003ad5820e8d9ba90"
	I0429 14:57:10.351364 2066013 cri.go:89] found id: "234f1bc29ee03822fc891e1eb09cc6c8593ba210feba7bf50c7b6cd9cf576542"
	I0429 14:57:10.351385 2066013 cri.go:89] found id: "a69ea36f1e8a4813b9516c3cd8c741c48a421dbd0fe66fb443e6d657c887230d"
	I0429 14:57:10.351403 2066013 cri.go:89] found id: ""
	I0429 14:57:10.351501 2066013 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-432914
helpers_test.go:235: (dbg) docker inspect pause-432914:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b701e3e8cc37c4a5e8511124c6ebdcaff59f8e6831023426096c99fc754df183",
	        "Created": "2024-04-29T14:56:11.415542922Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2062551,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-29T14:56:11.729215487Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c9315e0f61546d7b9630cf89252fa7f614fc966830e816cca5333df5c944376f",
	        "ResolvConfPath": "/var/lib/docker/containers/b701e3e8cc37c4a5e8511124c6ebdcaff59f8e6831023426096c99fc754df183/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b701e3e8cc37c4a5e8511124c6ebdcaff59f8e6831023426096c99fc754df183/hostname",
	        "HostsPath": "/var/lib/docker/containers/b701e3e8cc37c4a5e8511124c6ebdcaff59f8e6831023426096c99fc754df183/hosts",
	        "LogPath": "/var/lib/docker/containers/b701e3e8cc37c4a5e8511124c6ebdcaff59f8e6831023426096c99fc754df183/b701e3e8cc37c4a5e8511124c6ebdcaff59f8e6831023426096c99fc754df183-json.log",
	        "Name": "/pause-432914",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-432914:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-432914",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/66877a8a0e8c4bd0ab4eb515ad74d2b5ec12575808b65d2e62f7c641be78db98-init/diff:/var/lib/docker/overlay2/f080d6ed1efba2dbfce916f4260b407bf4d9204079d2708eb1c14f6847e489ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/66877a8a0e8c4bd0ab4eb515ad74d2b5ec12575808b65d2e62f7c641be78db98/merged",
	                "UpperDir": "/var/lib/docker/overlay2/66877a8a0e8c4bd0ab4eb515ad74d2b5ec12575808b65d2e62f7c641be78db98/diff",
	                "WorkDir": "/var/lib/docker/overlay2/66877a8a0e8c4bd0ab4eb515ad74d2b5ec12575808b65d2e62f7c641be78db98/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-432914",
	                "Source": "/var/lib/docker/volumes/pause-432914/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-432914",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-432914",
	                "name.minikube.sigs.k8s.io": "pause-432914",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e1de272df34ae3819533bc8524faad8b3ec823b59f4fa38080b7303e68c24856",
	            "SandboxKey": "/var/run/docker/netns/e1de272df34a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35297"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35296"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35293"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35295"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35294"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-432914": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "f3739010c6c2646bef56a873d01c39781ab34d562bf11750c83e972daedc8a30",
	                    "EndpointID": "7894bf8107ffadec6a88ccda14e5fd45e9ee894be6bd4793f135b8e2c613a305",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "pause-432914",
	                        "b701e3e8cc37"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-432914 -n pause-432914
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-432914 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-432914 logs -n 25: (2.231867924s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-991714         | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:50 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-991714         | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:50 UTC | 29 Apr 24 14:51 UTC |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-828310      | minikube                  | jenkins | v1.26.0 | 29 Apr 24 14:50 UTC | 29 Apr 24 14:52 UTC |
	|         | --memory=2200 --driver=docker  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-991714         | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC | 29 Apr 24 14:51 UTC |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-991714         | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC | 29 Apr 24 14:51 UTC |
	| start   | -p NoKubernetes-991714         | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC | 29 Apr 24 14:51 UTC |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-991714 sudo    | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-991714         | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC | 29 Apr 24 14:51 UTC |
	| start   | -p NoKubernetes-991714         | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC | 29 Apr 24 14:51 UTC |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-991714 sudo    | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-991714         | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC | 29 Apr 24 14:51 UTC |
	| start   | -p kubernetes-upgrade-960980   | kubernetes-upgrade-960980 | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC | 29 Apr 24 14:53 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-828310      | missing-upgrade-828310    | jenkins | v1.33.0 | 29 Apr 24 14:52 UTC | 29 Apr 24 14:53 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-960980   | kubernetes-upgrade-960980 | jenkins | v1.33.0 | 29 Apr 24 14:53 UTC | 29 Apr 24 14:53 UTC |
	| start   | -p kubernetes-upgrade-960980   | kubernetes-upgrade-960980 | jenkins | v1.33.0 | 29 Apr 24 14:53 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-828310      | missing-upgrade-828310    | jenkins | v1.33.0 | 29 Apr 24 14:53 UTC | 29 Apr 24 14:53 UTC |
	| start   | -p stopped-upgrade-518259      | minikube                  | jenkins | v1.26.0 | 29 Apr 24 14:53 UTC | 29 Apr 24 14:54 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --vm-driver=docker             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-518259 stop    | minikube                  | jenkins | v1.26.0 | 29 Apr 24 14:54 UTC | 29 Apr 24 14:54 UTC |
	| start   | -p stopped-upgrade-518259      | stopped-upgrade-518259    | jenkins | v1.33.0 | 29 Apr 24 14:54 UTC | 29 Apr 24 14:54 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-518259      | stopped-upgrade-518259    | jenkins | v1.33.0 | 29 Apr 24 14:54 UTC | 29 Apr 24 14:54 UTC |
	| start   | -p running-upgrade-195173      | minikube                  | jenkins | v1.26.0 | 29 Apr 24 14:54 UTC | 29 Apr 24 14:55 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --vm-driver=docker             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-195173      | running-upgrade-195173    | jenkins | v1.33.0 | 29 Apr 24 14:55 UTC | 29 Apr 24 14:56 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-195173      | running-upgrade-195173    | jenkins | v1.33.0 | 29 Apr 24 14:56 UTC | 29 Apr 24 14:56 UTC |
	| start   | -p pause-432914 --memory=2048  | pause-432914              | jenkins | v1.33.0 | 29 Apr 24 14:56 UTC | 29 Apr 24 14:56 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-432914                | pause-432914              | jenkins | v1.33.0 | 29 Apr 24 14:56 UTC | 29 Apr 24 14:57 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 14:56:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 14:56:59.856941 2066013 out.go:291] Setting OutFile to fd 1 ...
	I0429 14:56:59.857109 2066013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:56:59.857118 2066013 out.go:304] Setting ErrFile to fd 2...
	I0429 14:56:59.857123 2066013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:56:59.857373 2066013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
	I0429 14:56:59.857746 2066013 out.go:298] Setting JSON to false
	I0429 14:56:59.858751 2066013 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":38364,"bootTime":1714364256,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 14:56:59.858833 2066013 start.go:139] virtualization:  
	I0429 14:56:59.863461 2066013 out.go:177] * [pause-432914] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 14:56:59.866098 2066013 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 14:56:59.866138 2066013 notify.go:220] Checking for updates...
	I0429 14:56:59.868797 2066013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 14:56:59.870876 2066013 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:56:59.873103 2066013 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	I0429 14:56:59.875290 2066013 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0429 14:56:59.877785 2066013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 14:56:59.880268 2066013 config.go:182] Loaded profile config "pause-432914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:56:59.880950 2066013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 14:56:59.903091 2066013 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 14:56:59.903212 2066013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:56:59.967278 2066013 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2024-04-29 14:56:59.958094854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:56:59.967378 2066013 docker.go:295] overlay module found
	I0429 14:56:59.969951 2066013 out.go:177] * Using the docker driver based on existing profile
	I0429 14:56:59.971975 2066013 start.go:297] selected driver: docker
	I0429 14:56:59.972003 2066013 start.go:901] validating driver "docker" against &{Name:pause-432914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-432914 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-alias
es:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:56:59.972120 2066013 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 14:56:59.972213 2066013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:57:00.111129 2066013 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2024-04-29 14:57:00.070054459 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:57:00.111792 2066013 cni.go:84] Creating CNI manager for ""
	I0429 14:57:00.111815 2066013 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 14:57:00.111906 2066013 start.go:340] cluster config:
	{Name:pause-432914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-432914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-glus
ter:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:57:00.115754 2066013 out.go:177] * Starting "pause-432914" primary control-plane node in "pause-432914" cluster
	I0429 14:57:00.119077 2066013 cache.go:121] Beginning downloading kic base image for docker with crio
	I0429 14:57:00.123422 2066013 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 14:57:00.126605 2066013 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:57:00.126666 2066013 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 14:57:00.127845 2066013 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	I0429 14:57:00.128139 2066013 cache.go:56] Caching tarball of preloaded images
	I0429 14:57:00.128262 2066013 preload.go:173] Found /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0429 14:57:00.128275 2066013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 14:57:00.128422 2066013 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/config.json ...
	I0429 14:57:00.172818 2066013 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 14:57:00.172860 2066013 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 14:57:00.172895 2066013 cache.go:194] Successfully downloaded all kic artifacts
	I0429 14:57:00.172936 2066013 start.go:360] acquireMachinesLock for pause-432914: {Name:mk60e4243217024a35490e9d845b2c689d9870db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 14:57:00.173042 2066013 start.go:364] duration metric: took 74.363µs to acquireMachinesLock for "pause-432914"
	I0429 14:57:00.173071 2066013 start.go:96] Skipping create...Using existing machine configuration
	I0429 14:57:00.173089 2066013 fix.go:54] fixHost starting: 
	I0429 14:57:00.173418 2066013 cli_runner.go:164] Run: docker container inspect pause-432914 --format={{.State.Status}}
	I0429 14:57:00.204183 2066013 fix.go:112] recreateIfNeeded on pause-432914: state=Running err=<nil>
	W0429 14:57:00.204246 2066013 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 14:57:00.207908 2066013 out.go:177] * Updating the running docker "pause-432914" container ...
	I0429 14:56:58.153863 2048482 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0429 14:56:58.154274 2048482 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0429 14:56:58.154321 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 14:56:58.154410 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 14:56:58.202882 2048482 cri.go:89] found id: "c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77"
	I0429 14:56:58.202903 2048482 cri.go:89] found id: ""
	I0429 14:56:58.202911 2048482 logs.go:276] 1 containers: [c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77]
	I0429 14:56:58.202967 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:56:58.206609 2048482 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 14:56:58.206688 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 14:56:58.247456 2048482 cri.go:89] found id: ""
	I0429 14:56:58.247479 2048482 logs.go:276] 0 containers: []
	W0429 14:56:58.247488 2048482 logs.go:278] No container was found matching "etcd"
	I0429 14:56:58.247495 2048482 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 14:56:58.247557 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 14:56:58.284967 2048482 cri.go:89] found id: ""
	I0429 14:56:58.284996 2048482 logs.go:276] 0 containers: []
	W0429 14:56:58.285006 2048482 logs.go:278] No container was found matching "coredns"
	I0429 14:56:58.285013 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 14:56:58.285073 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 14:56:58.326027 2048482 cri.go:89] found id: "84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8"
	I0429 14:56:58.326048 2048482 cri.go:89] found id: ""
	I0429 14:56:58.326057 2048482 logs.go:276] 1 containers: [84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8]
	I0429 14:56:58.326111 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:56:58.329710 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 14:56:58.329777 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 14:56:58.374241 2048482 cri.go:89] found id: ""
	I0429 14:56:58.374264 2048482 logs.go:276] 0 containers: []
	W0429 14:56:58.374273 2048482 logs.go:278] No container was found matching "kube-proxy"
	I0429 14:56:58.374280 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 14:56:58.374338 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 14:56:58.419236 2048482 cri.go:89] found id: "8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539"
	I0429 14:56:58.419260 2048482 cri.go:89] found id: ""
	I0429 14:56:58.419268 2048482 logs.go:276] 1 containers: [8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539]
	I0429 14:56:58.419326 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:56:58.423190 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 14:56:58.423262 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 14:56:58.463178 2048482 cri.go:89] found id: ""
	I0429 14:56:58.463200 2048482 logs.go:276] 0 containers: []
	W0429 14:56:58.463211 2048482 logs.go:278] No container was found matching "kindnet"
	I0429 14:56:58.463247 2048482 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 14:56:58.463328 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 14:56:58.501245 2048482 cri.go:89] found id: ""
	I0429 14:56:58.501268 2048482 logs.go:276] 0 containers: []
	W0429 14:56:58.501277 2048482 logs.go:278] No container was found matching "storage-provisioner"
	I0429 14:56:58.501287 2048482 logs.go:123] Gathering logs for kube-apiserver [c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77] ...
	I0429 14:56:58.501299 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77"
	I0429 14:56:58.551684 2048482 logs.go:123] Gathering logs for kube-scheduler [84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8] ...
	I0429 14:56:58.551712 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8"
	I0429 14:56:58.646814 2048482 logs.go:123] Gathering logs for kube-controller-manager [8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539] ...
	I0429 14:56:58.646848 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539"
	I0429 14:56:58.687845 2048482 logs.go:123] Gathering logs for CRI-O ...
	I0429 14:56:58.687917 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 14:56:58.736484 2048482 logs.go:123] Gathering logs for container status ...
	I0429 14:56:58.736521 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 14:56:58.783286 2048482 logs.go:123] Gathering logs for kubelet ...
	I0429 14:56:58.783317 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 14:56:58.898265 2048482 logs.go:123] Gathering logs for dmesg ...
	I0429 14:56:58.898299 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 14:56:58.920434 2048482 logs.go:123] Gathering logs for describe nodes ...
	I0429 14:56:58.920508 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 14:56:59.007465 2048482 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
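	When "describe nodes" fails with the connection-refused error above, the apiserver container can be inspected directly on the node using the same crictl calls this log issues elsewhere; a minimal sketch, assuming SSH access to the node and substituting the container id reported by the first command:
	
		sudo crictl ps -a --name=kube-apiserver
		sudo crictl logs --tail 400 <container-id>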
	I0429 14:57:00.210335 2066013 machine.go:94] provisionDockerMachine start ...
	I0429 14:57:00.210488 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:00.234801 2066013 main.go:141] libmachine: Using SSH client type: native
	I0429 14:57:00.235147 2066013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35297 <nil> <nil>}
	I0429 14:57:00.235168 2066013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 14:57:00.380399 2066013 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-432914
	
	I0429 14:57:00.380429 2066013 ubuntu.go:169] provisioning hostname "pause-432914"
	I0429 14:57:00.380518 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:00.409210 2066013 main.go:141] libmachine: Using SSH client type: native
	I0429 14:57:00.409462 2066013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35297 <nil> <nil>}
	I0429 14:57:00.409488 2066013 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-432914 && echo "pause-432914" | sudo tee /etc/hostname
	I0429 14:57:00.549554 2066013 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-432914
	
	I0429 14:57:00.549707 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:00.567527 2066013 main.go:141] libmachine: Using SSH client type: native
	I0429 14:57:00.567802 2066013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35297 <nil> <nil>}
	I0429 14:57:00.567827 2066013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-432914' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-432914/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-432914' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 14:57:00.692845 2066013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 14:57:00.692871 2066013 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18771-1897267/.minikube CaCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18771-1897267/.minikube}
	I0429 14:57:00.692894 2066013 ubuntu.go:177] setting up certificates
	I0429 14:57:00.692904 2066013 provision.go:84] configureAuth start
	I0429 14:57:00.692964 2066013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-432914
	I0429 14:57:00.709078 2066013 provision.go:143] copyHostCerts
	I0429 14:57:00.709151 2066013 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem, removing ...
	I0429 14:57:00.709165 2066013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem
	I0429 14:57:00.709242 2066013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem (1679 bytes)
	I0429 14:57:00.709343 2066013 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem, removing ...
	I0429 14:57:00.709356 2066013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem
	I0429 14:57:00.709386 2066013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem (1078 bytes)
	I0429 14:57:00.709455 2066013 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem, removing ...
	I0429 14:57:00.709463 2066013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem
	I0429 14:57:00.709487 2066013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem (1123 bytes)
	I0429 14:57:00.709539 2066013 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem org=jenkins.pause-432914 san=[127.0.0.1 192.168.85.2 localhost minikube pause-432914]
	I0429 14:57:01.057253 2066013 provision.go:177] copyRemoteCerts
	I0429 14:57:01.057323 2066013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 14:57:01.057363 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:01.073781 2066013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35297 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/pause-432914/id_rsa Username:docker}
	I0429 14:57:01.178184 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 14:57:01.204914 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0429 14:57:01.230581 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 14:57:01.256104 2066013 provision.go:87] duration metric: took 563.186254ms to configureAuth
	I0429 14:57:01.256130 2066013 ubuntu.go:193] setting minikube options for container-runtime
	I0429 14:57:01.256429 2066013 config.go:182] Loaded profile config "pause-432914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:57:01.256563 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:01.273774 2066013 main.go:141] libmachine: Using SSH client type: native
	I0429 14:57:01.274030 2066013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35297 <nil> <nil>}
	I0429 14:57:01.274051 2066013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 14:57:01.508572 2048482 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0429 14:57:01.509009 2048482 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0429 14:57:01.509057 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 14:57:01.509122 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 14:57:01.550436 2048482 cri.go:89] found id: "c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77"
	I0429 14:57:01.550458 2048482 cri.go:89] found id: ""
	I0429 14:57:01.550466 2048482 logs.go:276] 1 containers: [c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77]
	I0429 14:57:01.550521 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:57:01.554166 2048482 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 14:57:01.554238 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 14:57:01.593090 2048482 cri.go:89] found id: ""
	I0429 14:57:01.593113 2048482 logs.go:276] 0 containers: []
	W0429 14:57:01.593122 2048482 logs.go:278] No container was found matching "etcd"
	I0429 14:57:01.593129 2048482 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 14:57:01.593189 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 14:57:01.631043 2048482 cri.go:89] found id: ""
	I0429 14:57:01.631066 2048482 logs.go:276] 0 containers: []
	W0429 14:57:01.631075 2048482 logs.go:278] No container was found matching "coredns"
	I0429 14:57:01.631081 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 14:57:01.631146 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 14:57:01.672037 2048482 cri.go:89] found id: "84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8"
	I0429 14:57:01.672060 2048482 cri.go:89] found id: ""
	I0429 14:57:01.672068 2048482 logs.go:276] 1 containers: [84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8]
	I0429 14:57:01.672123 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:57:01.675521 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 14:57:01.675588 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 14:57:01.711389 2048482 cri.go:89] found id: ""
	I0429 14:57:01.711412 2048482 logs.go:276] 0 containers: []
	W0429 14:57:01.711421 2048482 logs.go:278] No container was found matching "kube-proxy"
	I0429 14:57:01.711428 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 14:57:01.711486 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 14:57:01.754730 2048482 cri.go:89] found id: "8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539"
	I0429 14:57:01.754751 2048482 cri.go:89] found id: ""
	I0429 14:57:01.754759 2048482 logs.go:276] 1 containers: [8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539]
	I0429 14:57:01.754812 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:57:01.758277 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 14:57:01.758339 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 14:57:01.795553 2048482 cri.go:89] found id: ""
	I0429 14:57:01.795575 2048482 logs.go:276] 0 containers: []
	W0429 14:57:01.795584 2048482 logs.go:278] No container was found matching "kindnet"
	I0429 14:57:01.795591 2048482 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 14:57:01.795655 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 14:57:01.832198 2048482 cri.go:89] found id: ""
	I0429 14:57:01.832220 2048482 logs.go:276] 0 containers: []
	W0429 14:57:01.832229 2048482 logs.go:278] No container was found matching "storage-provisioner"
	I0429 14:57:01.832238 2048482 logs.go:123] Gathering logs for kubelet ...
	I0429 14:57:01.832249 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 14:57:01.943410 2048482 logs.go:123] Gathering logs for dmesg ...
	I0429 14:57:01.943450 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 14:57:01.962857 2048482 logs.go:123] Gathering logs for describe nodes ...
	I0429 14:57:01.962887 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 14:57:02.035863 2048482 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 14:57:02.035883 2048482 logs.go:123] Gathering logs for kube-apiserver [c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77] ...
	I0429 14:57:02.035896 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77"
	I0429 14:57:02.078597 2048482 logs.go:123] Gathering logs for kube-scheduler [84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8] ...
	I0429 14:57:02.078625 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8"
	I0429 14:57:02.172984 2048482 logs.go:123] Gathering logs for kube-controller-manager [8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539] ...
	I0429 14:57:02.173024 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539"
	I0429 14:57:02.219647 2048482 logs.go:123] Gathering logs for CRI-O ...
	I0429 14:57:02.219672 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 14:57:02.268538 2048482 logs.go:123] Gathering logs for container status ...
	I0429 14:57:02.268572 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 14:57:04.817441 2048482 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0429 14:57:04.817892 2048482 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0429 14:57:04.817939 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 14:57:04.817998 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 14:57:04.856000 2048482 cri.go:89] found id: "c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77"
	I0429 14:57:04.856020 2048482 cri.go:89] found id: ""
	I0429 14:57:04.856028 2048482 logs.go:276] 1 containers: [c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77]
	I0429 14:57:04.856087 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:57:04.859823 2048482 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 14:57:04.859888 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 14:57:04.896397 2048482 cri.go:89] found id: ""
	I0429 14:57:04.896420 2048482 logs.go:276] 0 containers: []
	W0429 14:57:04.896429 2048482 logs.go:278] No container was found matching "etcd"
	I0429 14:57:04.896435 2048482 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 14:57:04.896494 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 14:57:04.931605 2048482 cri.go:89] found id: ""
	I0429 14:57:04.931629 2048482 logs.go:276] 0 containers: []
	W0429 14:57:04.931638 2048482 logs.go:278] No container was found matching "coredns"
	I0429 14:57:04.931645 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 14:57:04.931702 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 14:57:04.967807 2048482 cri.go:89] found id: "84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8"
	I0429 14:57:04.967828 2048482 cri.go:89] found id: ""
	I0429 14:57:04.967836 2048482 logs.go:276] 1 containers: [84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8]
	I0429 14:57:04.967890 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:57:04.971274 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 14:57:04.971343 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 14:57:05.011056 2048482 cri.go:89] found id: ""
	I0429 14:57:05.011124 2048482 logs.go:276] 0 containers: []
	W0429 14:57:05.011142 2048482 logs.go:278] No container was found matching "kube-proxy"
	I0429 14:57:05.011150 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 14:57:05.011212 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 14:57:05.049029 2048482 cri.go:89] found id: "8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539"
	I0429 14:57:05.049049 2048482 cri.go:89] found id: ""
	I0429 14:57:05.049057 2048482 logs.go:276] 1 containers: [8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539]
	I0429 14:57:05.049116 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:57:05.052601 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 14:57:05.052752 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 14:57:05.093105 2048482 cri.go:89] found id: ""
	I0429 14:57:05.093132 2048482 logs.go:276] 0 containers: []
	W0429 14:57:05.093142 2048482 logs.go:278] No container was found matching "kindnet"
	I0429 14:57:05.093149 2048482 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 14:57:05.093214 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 14:57:05.133805 2048482 cri.go:89] found id: ""
	I0429 14:57:05.133830 2048482 logs.go:276] 0 containers: []
	W0429 14:57:05.133840 2048482 logs.go:278] No container was found matching "storage-provisioner"
	I0429 14:57:05.133849 2048482 logs.go:123] Gathering logs for kubelet ...
	I0429 14:57:05.133861 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 14:57:05.245501 2048482 logs.go:123] Gathering logs for dmesg ...
	I0429 14:57:05.245535 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 14:57:05.264490 2048482 logs.go:123] Gathering logs for describe nodes ...
	I0429 14:57:05.264517 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 14:57:05.333638 2048482 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 14:57:05.333656 2048482 logs.go:123] Gathering logs for kube-apiserver [c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77] ...
	I0429 14:57:05.333673 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77"
	I0429 14:57:05.375656 2048482 logs.go:123] Gathering logs for kube-scheduler [84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8] ...
	I0429 14:57:05.375683 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8"
	I0429 14:57:05.466347 2048482 logs.go:123] Gathering logs for kube-controller-manager [8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539] ...
	I0429 14:57:05.466384 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539"
	I0429 14:57:05.507687 2048482 logs.go:123] Gathering logs for CRI-O ...
	I0429 14:57:05.507712 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 14:57:05.553217 2048482 logs.go:123] Gathering logs for container status ...
	I0429 14:57:05.553253 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 14:57:06.657237 2066013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 14:57:06.657259 2066013 machine.go:97] duration metric: took 6.446894252s to provisionDockerMachine
	I0429 14:57:06.657271 2066013 start.go:293] postStartSetup for "pause-432914" (driver="docker")
	I0429 14:57:06.657283 2066013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 14:57:06.657343 2066013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 14:57:06.657390 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:06.673612 2066013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35297 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/pause-432914/id_rsa Username:docker}
	I0429 14:57:06.769664 2066013 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 14:57:06.772825 2066013 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0429 14:57:06.772860 2066013 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0429 14:57:06.772876 2066013 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0429 14:57:06.772883 2066013 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0429 14:57:06.772895 2066013 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-1897267/.minikube/addons for local assets ...
	I0429 14:57:06.772950 2066013 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-1897267/.minikube/files for local assets ...
	I0429 14:57:06.773030 2066013 filesync.go:149] local asset: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem -> 19026842.pem in /etc/ssl/certs
	I0429 14:57:06.773134 2066013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 14:57:06.781823 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem --> /etc/ssl/certs/19026842.pem (1708 bytes)
	I0429 14:57:06.806194 2066013 start.go:296] duration metric: took 148.908328ms for postStartSetup
	I0429 14:57:06.806300 2066013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 14:57:06.806355 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:06.822552 2066013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35297 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/pause-432914/id_rsa Username:docker}
	I0429 14:57:06.910211 2066013 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 14:57:06.915247 2066013 fix.go:56] duration metric: took 6.742161306s for fixHost
	I0429 14:57:06.915272 2066013 start.go:83] releasing machines lock for "pause-432914", held for 6.74221688s
	I0429 14:57:06.915366 2066013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-432914
	I0429 14:57:06.931459 2066013 ssh_runner.go:195] Run: cat /version.json
	I0429 14:57:06.931518 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:06.931777 2066013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 14:57:06.931829 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:06.949712 2066013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35297 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/pause-432914/id_rsa Username:docker}
	I0429 14:57:06.952361 2066013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35297 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/pause-432914/id_rsa Username:docker}
	I0429 14:57:07.036591 2066013 ssh_runner.go:195] Run: systemctl --version
	I0429 14:57:07.152061 2066013 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 14:57:07.295531 2066013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 14:57:07.300168 2066013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 14:57:07.309257 2066013 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0429 14:57:07.309387 2066013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 14:57:07.318689 2066013 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 14:57:07.318713 2066013 start.go:494] detecting cgroup driver to use...
	I0429 14:57:07.318747 2066013 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0429 14:57:07.318805 2066013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 14:57:07.331792 2066013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 14:57:07.343855 2066013 docker.go:217] disabling cri-docker service (if available) ...
	I0429 14:57:07.343920 2066013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 14:57:07.357152 2066013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 14:57:07.370524 2066013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 14:57:07.499416 2066013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 14:57:07.621891 2066013 docker.go:233] disabling docker service ...
	I0429 14:57:07.621964 2066013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 14:57:07.635580 2066013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 14:57:07.647851 2066013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 14:57:07.762420 2066013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 14:57:07.888941 2066013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 14:57:07.900859 2066013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 14:57:07.917641 2066013 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 14:57:07.917706 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.927195 2066013 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 14:57:07.927265 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.936976 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.947659 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.957552 2066013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 14:57:07.967046 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.977338 2066013 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.986955 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.998835 2066013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 14:57:08.011130 2066013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 14:57:08.021093 2066013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:57:08.162398 2066013 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 14:57:08.346921 2066013 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 14:57:08.346990 2066013 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 14:57:08.353250 2066013 start.go:562] Will wait 60s for crictl version
	I0429 14:57:08.353312 2066013 ssh_runner.go:195] Run: which crictl
	I0429 14:57:08.357948 2066013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 14:57:08.435034 2066013 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0429 14:57:08.435117 2066013 ssh_runner.go:195] Run: crio --version
	I0429 14:57:08.476287 2066013 ssh_runner.go:195] Run: crio --version
	I0429 14:57:08.553392 2066013 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.24.6 ...
	I0429 14:57:08.556106 2066013 cli_runner.go:164] Run: docker network inspect pause-432914 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 14:57:08.578343 2066013 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0429 14:57:08.582508 2066013 kubeadm.go:877] updating cluster {Name:pause-432914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-432914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry
-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 14:57:08.582660 2066013 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:57:08.582714 2066013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 14:57:08.643260 2066013 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 14:57:08.643283 2066013 crio.go:433] Images already preloaded, skipping extraction
	I0429 14:57:08.643337 2066013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 14:57:08.703568 2066013 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 14:57:08.703595 2066013 cache_images.go:84] Images are preloaded, skipping loading
	I0429 14:57:08.703604 2066013 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.30.0 crio true true} ...
	I0429 14:57:08.703711 2066013 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-432914 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:pause-432914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 14:57:08.703803 2066013 ssh_runner.go:195] Run: crio config
	I0429 14:57:08.792543 2066013 cni.go:84] Creating CNI manager for ""
	I0429 14:57:08.792562 2066013 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 14:57:08.792578 2066013 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 14:57:08.792600 2066013 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-432914 NodeName:pause-432914 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 14:57:08.792773 2066013 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-432914"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
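	The config rendered above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A minimal sketch for inspecting it on the node — assuming the pause-432914 profile is still running, and assuming the bundled kubeadm sits next to kubelet under /var/lib/minikube/binaries/v1.30.0 and ships the `config validate` subcommand (present in recent kubeadm releases):
	
	# view the rendered file on the node
	minikube -p pause-432914 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
	# optional sanity check (assumes `kubeadm config validate` is available in this version)
	minikube -p pause-432914 ssh "sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"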
	
	I0429 14:57:08.792850 2066013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 14:57:08.802178 2066013 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 14:57:08.802241 2066013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 14:57:08.810666 2066013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0429 14:57:08.830217 2066013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 14:57:08.849023 2066013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0429 14:57:08.875753 2066013 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0429 14:57:08.880752 2066013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:57:09.128182 2066013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 14:57:09.230635 2066013 certs.go:68] Setting up /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914 for IP: 192.168.85.2
	I0429 14:57:09.230654 2066013 certs.go:194] generating shared ca certs ...
	I0429 14:57:09.230682 2066013 certs.go:226] acquiring lock for ca certs: {Name:mk012c6865f9f1625b7bfd5d0280b6707793520e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:57:09.230838 2066013 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key
	I0429 14:57:09.230884 2066013 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key
	I0429 14:57:09.230891 2066013 certs.go:256] generating profile certs ...
	I0429 14:57:09.230977 2066013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/client.key
	I0429 14:57:09.231037 2066013 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/apiserver.key.01cd6b34
	I0429 14:57:09.231074 2066013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/proxy-client.key
	I0429 14:57:09.231175 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684.pem (1338 bytes)
	W0429 14:57:09.231200 2066013 certs.go:480] ignoring /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684_empty.pem, impossibly tiny 0 bytes
	I0429 14:57:09.231208 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 14:57:09.231236 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem (1078 bytes)
	I0429 14:57:09.231258 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem (1123 bytes)
	I0429 14:57:09.231284 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem (1679 bytes)
	I0429 14:57:09.231326 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem (1708 bytes)
	I0429 14:57:09.231935 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 14:57:09.340414 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 14:57:09.398138 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 14:57:09.497027 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 14:57:09.592245 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0429 14:57:09.634082 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 14:57:09.682931 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 14:57:09.730044 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 14:57:09.777792 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 14:57:09.822642 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684.pem --> /usr/share/ca-certificates/1902684.pem (1338 bytes)
	I0429 14:57:09.875788 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem --> /usr/share/ca-certificates/19026842.pem (1708 bytes)
	I0429 14:57:09.925488 2066013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 14:57:09.953926 2066013 ssh_runner.go:195] Run: openssl version
	I0429 14:57:09.969086 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 14:57:09.989799 2066013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:57:10.004602 2066013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 14:07 /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:57:10.004874 2066013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:57:10.021257 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 14:57:10.048197 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1902684.pem && ln -fs /usr/share/ca-certificates/1902684.pem /etc/ssl/certs/1902684.pem"
	I0429 14:57:10.064364 2066013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1902684.pem
	I0429 14:57:10.068571 2066013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 14:18 /usr/share/ca-certificates/1902684.pem
	I0429 14:57:10.068736 2066013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1902684.pem
	I0429 14:57:10.076289 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1902684.pem /etc/ssl/certs/51391683.0"
	I0429 14:57:10.109542 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19026842.pem && ln -fs /usr/share/ca-certificates/19026842.pem /etc/ssl/certs/19026842.pem"
	I0429 14:57:10.135614 2066013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19026842.pem
	I0429 14:57:10.147807 2066013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 14:18 /usr/share/ca-certificates/19026842.pem
	I0429 14:57:10.147922 2066013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19026842.pem
	I0429 14:57:10.163407 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19026842.pem /etc/ssl/certs/3ec20f2e.0"
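	The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention for the /etc/ssl/certs trust directory: the file name is the certificate's subject hash plus a ".0" suffix. A sketch of how the b5213941.0 entry for minikubeCA is derived (paths taken from this log, the rest illustrative):
	
	# compute the subject hash and install the CA under /etc/ssl/certs
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"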
	I0429 14:57:10.197650 2066013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 14:57:10.207691 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 14:57:10.221662 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 14:57:10.238407 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 14:57:10.254520 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 14:57:10.267199 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 14:57:10.282949 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
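	Each `openssl x509 -checkend 86400` call above exits 0 only if the certificate remains valid for at least the next 24 hours (86400 seconds), so a non-zero exit flags an imminent expiry. A sketch that runs the same 24-hour check over the control-plane certs listed in this log:
	
	# report which of the profile certs would expire within the next 24h
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	    && echo "${c}: valid for >24h" || echo "${c}: expires within 24h"
	done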
	I0429 14:57:10.295017 2066013 kubeadm.go:391] StartCluster: {Name:pause-432914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-432914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-cr
eds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:57:10.295193 2066013 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 14:57:10.295301 2066013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 14:57:10.351014 2066013 cri.go:89] found id: "a345b76233ad4e66cdff7662b867c232d392ed9c5f83d138ad1ebe3237aabca5"
	I0429 14:57:10.351088 2066013 cri.go:89] found id: "bc0f928fe0a658fbe2067b9f43871766f66ec9782c3e5acc32b91270bd624674"
	I0429 14:57:10.351107 2066013 cri.go:89] found id: "1cf12f4f214ef08f0e8c3f0dcb29a53ef72f4992a5dbe6a3df52d0d3751eda67"
	I0429 14:57:10.351122 2066013 cri.go:89] found id: "75470843e8018da0a5e303803151792aa9c26806ed406f16a72c26e9ce798d98"
	I0429 14:57:10.351139 2066013 cri.go:89] found id: "229390072c65722944c9a0fa5cf8515c9bbdeb9d3b6ae5aa6e6fa92958f427cb"
	I0429 14:57:10.351173 2066013 cri.go:89] found id: "cc2a18c92b0145ea5f55e58e03c469864baaf8c8fb578f2f06ca7713c485e56b"
	I0429 14:57:10.351190 2066013 cri.go:89] found id: "e729ee3b8b03da4783e7a9039abe12fa2bf82753e1f70aaf14e2c9a1374a0d71"
	I0429 14:57:10.351208 2066013 cri.go:89] found id: "8b6b1e430c982f19870d1aa416f56a799ba394846813c0552d13c3696f36bf50"
	I0429 14:57:10.351226 2066013 cri.go:89] found id: "0308fcfdc97181969e31209745d6443d2f452bfdd05aceb64ed104406c36e134"
	I0429 14:57:10.351255 2066013 cri.go:89] found id: "35331f4b60b1fc1c99b4f02d3288515822a4de533c83b26ed2aef975beac13e6"
	I0429 14:57:10.351279 2066013 cri.go:89] found id: "960e58a37bfe8df05c93e4c739b85c0523d77bbcadcdac18c09640f06af6c076"
	I0429 14:57:10.351297 2066013 cri.go:89] found id: "d90fc1365e9e01da73afcded913c468886648421a06c72f50e7822767e16769e"
	I0429 14:57:10.351314 2066013 cri.go:89] found id: "69c7e75b4d4f3ef42f8bfa1e90c23b93b7fe66a5cbd4bcead85fb41ed26a968f"
	I0429 14:57:10.351332 2066013 cri.go:89] found id: "133431ffb9b984f5f6320a552799fdcc9af3dc92e0e3b77003ad5820e8d9ba90"
	I0429 14:57:10.351364 2066013 cri.go:89] found id: "234f1bc29ee03822fc891e1eb09cc6c8593ba210feba7bf50c7b6cd9cf576542"
	I0429 14:57:10.351385 2066013 cri.go:89] found id: "a69ea36f1e8a4813b9516c3cd8c741c48a421dbd0fe66fb443e6d657c887230d"
	I0429 14:57:10.351403 2066013 cri.go:89] found id: ""
	I0429 14:57:10.351501 2066013 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.453379839Z" level=info msg="Starting container: bc0f928fe0a658fbe2067b9f43871766f66ec9782c3e5acc32b91270bd624674" id=b1b9e2b4-bf76-4cce-805b-fae463d57be4 name=/runtime.v1.RuntimeService/StartContainer
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.467035158Z" level=info msg="Started container" PID=2787 containerID=75470843e8018da0a5e303803151792aa9c26806ed406f16a72c26e9ce798d98 description=kube-system/coredns-7db6d8ff4d-p5vp6/coredns id=e689b3d4-5e4d-45fe-bd07-ce78ef92f89f name=/runtime.v1.RuntimeService/StartContainer sandboxID=f443483c3aa22c4b8127fb8365af21dee350a4a7527ce7e57746506db5c68099
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.487146486Z" level=info msg="Started container" PID=2724 containerID=cc2a18c92b0145ea5f55e58e03c469864baaf8c8fb578f2f06ca7713c485e56b description=kube-system/kube-apiserver-pause-432914/kube-apiserver id=b1d4af90-942f-456d-a2bd-6d67ec5bd3f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=93c46304ef8cd9a1cc8cadd3efafee4dc9ae4243c1d7f0dc9d012c7ee4e239ab
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.488621664Z" level=info msg="Created container a345b76233ad4e66cdff7662b867c232d392ed9c5f83d138ad1ebe3237aabca5: kube-system/coredns-7db6d8ff4d-74wxc/coredns" id=841e19b7-795c-47e8-a277-4e0ef458f8d3 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.489265773Z" level=info msg="Starting container: a345b76233ad4e66cdff7662b867c232d392ed9c5f83d138ad1ebe3237aabca5" id=6985fea4-9ed4-41c5-b0b5-f6f513f26a8b name=/runtime.v1.RuntimeService/StartContainer
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.507932487Z" level=info msg="Created container 8b6b1e430c982f19870d1aa416f56a799ba394846813c0552d13c3696f36bf50: kube-system/etcd-pause-432914/etcd" id=573c260d-4a53-44aa-ae7e-e43057a86411 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.508600439Z" level=info msg="Starting container: 8b6b1e430c982f19870d1aa416f56a799ba394846813c0552d13c3696f36bf50" id=57bf36f8-c234-4990-9d76-87eb6495865b name=/runtime.v1.RuntimeService/StartContainer
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.513431454Z" level=info msg="Created container 229390072c65722944c9a0fa5cf8515c9bbdeb9d3b6ae5aa6e6fa92958f427cb: kube-system/kube-proxy-5djxx/kube-proxy" id=bc2179fe-52dc-40b2-989d-c8fafa0c134a name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.514007238Z" level=info msg="Starting container: 229390072c65722944c9a0fa5cf8515c9bbdeb9d3b6ae5aa6e6fa92958f427cb" id=80c81951-ef62-47fd-87b5-bce672b0c496 name=/runtime.v1.RuntimeService/StartContainer
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.521577303Z" level=info msg="Started container" PID=2784 containerID=bc0f928fe0a658fbe2067b9f43871766f66ec9782c3e5acc32b91270bd624674 description=kube-system/kube-scheduler-pause-432914/kube-scheduler id=b1b9e2b4-bf76-4cce-805b-fae463d57be4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b62f440c2c9645a9e7168d508a4b6ea8ba807b553b7a88ca15a70be4266f0603
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.530456221Z" level=info msg="Started container" PID=2822 containerID=a345b76233ad4e66cdff7662b867c232d392ed9c5f83d138ad1ebe3237aabca5 description=kube-system/coredns-7db6d8ff4d-74wxc/coredns id=6985fea4-9ed4-41c5-b0b5-f6f513f26a8b name=/runtime.v1.RuntimeService/StartContainer sandboxID=bb91810ea9d82192a39cfa75e13178e50cd13772cb96f3655bb8267b023e674c
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.537761155Z" level=info msg="Started container" PID=2726 containerID=229390072c65722944c9a0fa5cf8515c9bbdeb9d3b6ae5aa6e6fa92958f427cb description=kube-system/kube-proxy-5djxx/kube-proxy id=80c81951-ef62-47fd-87b5-bce672b0c496 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5badd92100177ef017e1410375de5f741265a345f120b1bf3ed4c1c9b6413bb5
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.539790374Z" level=info msg="Started container" PID=2694 containerID=8b6b1e430c982f19870d1aa416f56a799ba394846813c0552d13c3696f36bf50 description=kube-system/etcd-pause-432914/etcd id=57bf36f8-c234-4990-9d76-87eb6495865b name=/runtime.v1.RuntimeService/StartContainer sandboxID=dcdd0ca1bdaead53078f16252e630c04903d51b0daf5affcbe645d8c14c8d578
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.624802957Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.641064806Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.641099612Z" level=info msg="Updated default CNI network name to kindnet"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.641123070Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.659043867Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.659077647Z" level=info msg="Updated default CNI network name to kindnet"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.659093606Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.678426024Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.678467444Z" level=info msg="Updated default CNI network name to kindnet"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.678496998Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.702825501Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.702860069Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a345b76233ad4       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   20 seconds ago      Running             coredns                   1                   bb91810ea9d82       coredns-7db6d8ff4d-74wxc
	bc0f928fe0a65       547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a   20 seconds ago      Running             kube-scheduler            1                   b62f440c2c964       kube-scheduler-pause-432914
	1cf12f4f214ef       68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1   20 seconds ago      Running             kube-controller-manager   1                   2f0726093feba       kube-controller-manager-pause-432914
	75470843e8018       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   20 seconds ago      Running             coredns                   1                   f443483c3aa22       coredns-7db6d8ff4d-p5vp6
	229390072c657       cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f   20 seconds ago      Running             kube-proxy                1                   5badd92100177       kube-proxy-5djxx
	cc2a18c92b014       181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb   20 seconds ago      Running             kube-apiserver            1                   93c46304ef8cd       kube-apiserver-pause-432914
	e729ee3b8b03d       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d   20 seconds ago      Running             kindnet-cni               1                   b0429fa604682       kindnet-lw2xg
	8b6b1e430c982       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd   20 seconds ago      Running             etcd                      1                   dcdd0ca1bdaea       etcd-pause-432914
	0308fcfdc9718       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   32 seconds ago      Exited              coredns                   0                   f443483c3aa22       coredns-7db6d8ff4d-p5vp6
	35331f4b60b1f       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   32 seconds ago      Exited              coredns                   0                   bb91810ea9d82       coredns-7db6d8ff4d-74wxc
	960e58a37bfe8       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d   35 seconds ago      Exited              kindnet-cni               0                   b0429fa604682       kindnet-lw2xg
	d90fc1365e9e0       cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f   35 seconds ago      Exited              kube-proxy                0                   5badd92100177       kube-proxy-5djxx
	69c7e75b4d4f3       181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb   59 seconds ago      Exited              kube-apiserver            0                   93c46304ef8cd       kube-apiserver-pause-432914
	133431ffb9b98       547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a   59 seconds ago      Exited              kube-scheduler            0                   b62f440c2c964       kube-scheduler-pause-432914
	234f1bc29ee03       68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1   59 seconds ago      Exited              kube-controller-manager   0                   2f0726093feba       kube-controller-manager-pause-432914
	a69ea36f1e8a4       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd   59 seconds ago      Exited              etcd                      0                   dcdd0ca1bdaea       etcd-pause-432914
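	The listing above is the node's CRI container table; on a running profile it can be reproduced (a sketch, assuming the pause-432914 profile is still up) with:
	
	minikube -p pause-432914 ssh "sudo crictl ps -a"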
	
	
	==> coredns [0308fcfdc97181969e31209745d6443d2f452bfdd05aceb64ed104406c36e134] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34030 - 9411 "HINFO IN 4244158090950760647.4628995706976055698. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020159706s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [35331f4b60b1fc1c99b4f02d3288515822a4de533c83b26ed2aef975beac13e6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39358 - 8965 "HINFO IN 5143244625229942013.258205165521647658. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021697087s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [75470843e8018da0a5e303803151792aa9c26806ed406f16a72c26e9ce798d98] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36116 - 2937 "HINFO IN 994152511160688042.639342022059642520. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.013200559s
	
	
	==> coredns [a345b76233ad4e66cdff7662b867c232d392ed9c5f83d138ad1ebe3237aabca5] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39744 - 14407 "HINFO IN 6128929205827398344.4425042968035430364. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02225628s
	
	
	==> describe nodes <==
	Name:               pause-432914
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-432914
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=94a01df0d4b48636d4af3d06a53be687e06c0844
	                    minikube.k8s.io/name=pause-432914
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T14_56_38_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 14:56:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-432914
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 14:57:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 14:56:55 +0000   Mon, 29 Apr 2024 14:56:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 14:56:55 +0000   Mon, 29 Apr 2024 14:56:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 14:56:55 +0000   Mon, 29 Apr 2024 14:56:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 14:56:55 +0000   Mon, 29 Apr 2024 14:56:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-432914
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 ea8106e7a72e4845a4ac0fc4d3c44457
	  System UUID:                780f4676-432b-49c8-bedc-bb93c40e086a
	  Boot ID:                    b8f2360a-0b19-4e04-aa8c-604719eae8f1
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-74wxc                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     37s
	  kube-system                 coredns-7db6d8ff4d-p5vp6                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     37s
	  kube-system                 etcd-pause-432914                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         52s
	  kube-system                 kindnet-lw2xg                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      38s
	  kube-system                 kube-apiserver-pause-432914             250m (12%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-pause-432914    200m (10%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-proxy-5djxx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-scheduler-pause-432914             100m (5%)     0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 35s                kube-proxy       
	  Normal  Starting                 15s                kube-proxy       
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node pause-432914 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node pause-432914 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node pause-432914 status is now: NodeHasSufficientPID
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s                kubelet          Node pause-432914 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s                kubelet          Node pause-432914 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s                kubelet          Node pause-432914 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                node-controller  Node pause-432914 event: Registered Node pause-432914 in Controller
	  Normal  NodeReady                35s                kubelet          Node pause-432914 status is now: NodeReady
	  Normal  RegisteredNode           4s                 node-controller  Node pause-432914 event: Registered Node pause-432914 in Controller
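	The node summary above is standard `kubectl describe node` output; to reproduce it against this profile (a sketch, assuming the kubeconfig context carries the profile's name, as minikube normally sets it):
	
	kubectl --context pause-432914 describe node pause-432914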
	
	
	==> dmesg <==
	[  +0.001039] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000c85fb157
	[  +0.001211] FS-Cache: N-key=[8] 'ef445c0100000000'
	[  +0.002692] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=00000072 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001189] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=0000000050015021
	[  +0.001188] FS-Cache: O-key=[8] 'ef445c0100000000'
	[  +0.000776] FS-Cache: N-cookie c=00000079 [p=0000006f fl=2 nc=0 na=1]
	[  +0.000979] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=000000008e2f8fe5
	[  +0.001268] FS-Cache: N-key=[8] 'ef445c0100000000'
	[  +3.179364] FS-Cache: Duplicate cookie detected
	[  +0.000801] FS-Cache: O-cookie c=00000070 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001026] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=00000000c9ff6823
	[  +0.001183] FS-Cache: O-key=[8] 'ee445c0100000000'
	[  +0.000813] FS-Cache: N-cookie c=0000007b [p=0000006f fl=2 nc=0 na=1]
	[  +0.000953] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000c85fb157
	[  +0.001094] FS-Cache: N-key=[8] 'ee445c0100000000'
	[  +0.286519] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=00000075 [p=0000006f fl=226 nc=0 na=1]
	[  +0.000964] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=00000000adc83d13
	[  +0.001065] FS-Cache: O-key=[8] 'f4445c0100000000'
	[  +0.000785] FS-Cache: N-cookie c=0000007c [p=0000006f fl=2 nc=0 na=1]
	[  +0.000978] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000652a63b0
	[  +0.001125] FS-Cache: N-key=[8] 'f4445c0100000000'
	[Apr29 14:55] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +0.628253] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [8b6b1e430c982f19870d1aa416f56a799ba394846813c0552d13c3696f36bf50] <==
	{"level":"info","ts":"2024-04-29T14:57:09.805081Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T14:57:09.805105Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T14:57:09.805312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2024-04-29T14:57:09.805378Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2024-04-29T14:57:09.805485Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T14:57:09.805516Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T14:57:09.819193Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T14:57:09.819407Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T14:57:09.819435Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T14:57:09.819556Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-04-29T14:57:09.81957Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-04-29T14:57:11.476705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T14:57:11.476821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T14:57:11.476874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-04-29T14:57:11.476915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T14:57:11.47695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2024-04-29T14:57:11.476986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2024-04-29T14:57:11.477022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2024-04-29T14:57:11.480938Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-432914 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T14:57:11.48114Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T14:57:11.48141Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T14:57:11.484326Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2024-04-29T14:57:11.493889Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T14:57:11.494231Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T14:57:11.495516Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [a69ea36f1e8a4813b9516c3cd8c741c48a421dbd0fe66fb443e6d657c887230d] <==
	{"level":"info","ts":"2024-04-29T14:56:31.180818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T14:56:31.180825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-04-29T14:56:31.180835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2024-04-29T14:56:31.180843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-04-29T14:56:31.184787Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T14:56:31.187484Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-432914 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T14:56:31.187629Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T14:56:31.187936Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T14:56:31.188627Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T14:56:31.188656Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T14:56:31.190189Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T14:56:31.190318Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T14:56:31.190373Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T14:56:31.191831Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2024-04-29T14:56:31.205666Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T14:57:01.422767Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-29T14:57:01.422898Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-432914","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"warn","ts":"2024-04-29T14:57:01.422976Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T14:57:01.42306Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T14:57:01.469679Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T14:57:01.469737Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-29T14:57:01.469812Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2024-04-29T14:57:01.47284Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-04-29T14:57:01.472981Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-04-29T14:57:01.472997Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-432914","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 14:57:30 up 10:39,  0 users,  load average: 2.82, 2.86, 2.37
	Linux pause-432914 5.15.0-1058-aws #64~20.04.1-Ubuntu SMP Tue Apr 9 11:11:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [960e58a37bfe8df05c93e4c739b85c0523d77bbcadcdac18c09640f06af6c076] <==
	I0429 14:56:54.316933       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0429 14:56:54.317007       1 main.go:107] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0429 14:56:54.317112       1 main.go:116] setting mtu 1500 for CNI 
	I0429 14:56:54.317122       1 main.go:146] kindnetd IP family: "ipv4"
	I0429 14:56:54.317131       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0429 14:56:54.715599       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 14:56:54.715632       1 main.go:227] handling current node
	
	
	==> kindnet [e729ee3b8b03da4783e7a9039abe12fa2bf82753e1f70aaf14e2c9a1374a0d71] <==
	I0429 14:57:09.497125       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0429 14:57:09.501440       1 main.go:107] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0429 14:57:09.501763       1 main.go:116] setting mtu 1500 for CNI 
	I0429 14:57:09.503653       1 main.go:146] kindnetd IP family: "ipv4"
	I0429 14:57:09.503683       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0429 14:57:09.741390       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0429 14:57:09.741698       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0429 14:57:14.624426       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 14:57:14.624538       1 main.go:227] handling current node
	I0429 14:57:24.639315       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 14:57:24.639342       1 main.go:227] handling current node
	
	
	==> kube-apiserver [69c7e75b4d4f3ef42f8bfa1e90c23b93b7fe66a5cbd4bcead85fb41ed26a968f] <==
	W0429 14:57:01.458329       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458362       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458390       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458422       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458454       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458480       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458513       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458649       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458672       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458693       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458714       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458736       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458764       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0429 14:57:01.458834       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 14:57:01.458959       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0429 14:57:01.458985       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0429 14:57:01.459030       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0429 14:57:01.462460       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0429 14:57:01.459068       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0429 14:57:01.463046       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0429 14:57:01.463178       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0429 14:57:01.463672       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0429 14:57:01.464906       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0429 14:57:01.469943       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0429 14:57:01.470058       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	
	==> kube-apiserver [cc2a18c92b0145ea5f55e58e03c469864baaf8c8fb578f2f06ca7713c485e56b] <==
	I0429 14:57:14.311826       1 naming_controller.go:291] Starting NamingConditionController
	I0429 14:57:14.311835       1 establishing_controller.go:76] Starting EstablishingController
	I0429 14:57:14.311845       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0429 14:57:14.311853       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0429 14:57:14.311860       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0429 14:57:14.312129       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0429 14:57:14.519782       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0429 14:57:14.548559       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 14:57:14.548610       1 policy_source.go:224] refreshing policies
	I0429 14:57:14.552384       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 14:57:14.554689       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 14:57:14.621042       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 14:57:14.631549       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 14:57:14.631613       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 14:57:14.631621       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 14:57:14.631635       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 14:57:14.631692       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 14:57:14.636184       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 14:57:14.638030       1 aggregator.go:165] initial CRD sync complete...
	I0429 14:57:14.638142       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 14:57:14.638177       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 14:57:14.638208       1 cache.go:39] Caches are synced for autoregister controller
	I0429 14:57:14.666468       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0429 14:57:14.689374       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0429 14:57:15.311482       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	
	
	==> kube-controller-manager [1cf12f4f214ef08f0e8c3f0dcb29a53ef72f4992a5dbe6a3df52d0d3751eda67] <==
	I0429 14:57:26.839587       1 shared_informer.go:320] Caches are synced for disruption
	I0429 14:57:26.839601       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0429 14:57:26.839610       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0429 14:57:26.840292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.204µs"
	I0429 14:57:26.839620       1 shared_informer.go:320] Caches are synced for persistent volume
	I0429 14:57:26.840593       1 shared_informer.go:320] Caches are synced for job
	I0429 14:57:26.845338       1 shared_informer.go:320] Caches are synced for stateful set
	I0429 14:57:26.858459       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0429 14:57:26.859581       1 shared_informer.go:320] Caches are synced for taint
	I0429 14:57:26.859678       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0429 14:57:26.859767       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-432914"
	I0429 14:57:26.859827       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0429 14:57:26.860353       1 shared_informer.go:320] Caches are synced for GC
	I0429 14:57:26.861429       1 shared_informer.go:320] Caches are synced for daemon sets
	I0429 14:57:26.862802       1 shared_informer.go:320] Caches are synced for deployment
	I0429 14:57:26.890402       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0429 14:57:26.892797       1 shared_informer.go:320] Caches are synced for crt configmap
	I0429 14:57:26.949561       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 14:57:26.952853       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0429 14:57:26.958228       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0429 14:57:26.970710       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 14:57:26.987975       1 shared_informer.go:320] Caches are synced for endpoint
	I0429 14:57:27.401749       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 14:57:27.401880       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 14:57:27.407926       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [234f1bc29ee03822fc891e1eb09cc6c8593ba210feba7bf50c7b6cd9cf576542] <==
	I0429 14:56:51.948435       1 shared_informer.go:320] Caches are synced for crt configmap
	I0429 14:56:51.954563       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0429 14:56:51.964208       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="pause-432914" podCIDRs=["10.244.0.0/24"]
	I0429 14:56:51.980763       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 14:56:52.020804       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0429 14:56:52.029041       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 14:56:52.047040       1 shared_informer.go:320] Caches are synced for endpoint
	I0429 14:56:52.124553       1 shared_informer.go:320] Caches are synced for persistent volume
	I0429 14:56:52.551751       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 14:56:52.575867       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 14:56:52.575901       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 14:56:53.054735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="352.233705ms"
	I0429 14:56:53.098596       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.811289ms"
	I0429 14:56:53.126449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.796798ms"
	I0429 14:56:53.126591       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.383µs"
	I0429 14:56:55.088449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.464µs"
	I0429 14:56:55.126359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.218µs"
	I0429 14:56:56.280593       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.99µs"
	I0429 14:56:56.291726       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.152µs"
	I0429 14:56:56.861019       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 14:56:58.017636       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.308µs"
	I0429 14:56:58.051822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.141736ms"
	I0429 14:56:58.052008       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.968µs"
	I0429 14:56:58.079773       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.489699ms"
	I0429 14:56:58.079962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.853µs"
	
	
	==> kube-proxy [229390072c65722944c9a0fa5cf8515c9bbdeb9d3b6ae5aa6e6fa92958f427cb] <==
	I0429 14:57:09.911768       1 server_linux.go:69] "Using iptables proxy"
	I0429 14:57:14.644579       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.85.2"]
	I0429 14:57:14.731703       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0429 14:57:14.731836       1 server_linux.go:165] "Using iptables Proxier"
	I0429 14:57:14.735997       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0429 14:57:14.736089       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0429 14:57:14.736146       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 14:57:14.736375       1 server.go:872] "Version info" version="v1.30.0"
	I0429 14:57:14.736616       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 14:57:14.737725       1 config.go:192] "Starting service config controller"
	I0429 14:57:14.737790       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 14:57:14.737870       1 config.go:101] "Starting endpoint slice config controller"
	I0429 14:57:14.737900       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 14:57:14.738422       1 config.go:319] "Starting node config controller"
	I0429 14:57:14.738472       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 14:57:14.840725       1 shared_informer.go:320] Caches are synced for service config
	I0429 14:57:14.841176       1 shared_informer.go:320] Caches are synced for node config
	I0429 14:57:14.841282       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d90fc1365e9e01da73afcded913c468886648421a06c72f50e7822767e16769e] <==
	I0429 14:56:54.259447       1 server_linux.go:69] "Using iptables proxy"
	I0429 14:56:54.281182       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.85.2"]
	I0429 14:56:54.306637       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0429 14:56:54.306689       1 server_linux.go:165] "Using iptables Proxier"
	I0429 14:56:54.308311       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0429 14:56:54.308337       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0429 14:56:54.308388       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 14:56:54.309105       1 server.go:872] "Version info" version="v1.30.0"
	I0429 14:56:54.309233       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 14:56:54.311852       1 config.go:192] "Starting service config controller"
	I0429 14:56:54.311876       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 14:56:54.311982       1 config.go:101] "Starting endpoint slice config controller"
	I0429 14:56:54.311993       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 14:56:54.316855       1 config.go:319] "Starting node config controller"
	I0429 14:56:54.316949       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 14:56:54.412050       1 shared_informer.go:320] Caches are synced for service config
	I0429 14:56:54.412727       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 14:56:54.417943       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [133431ffb9b984f5f6320a552799fdcc9af3dc92e0e3b77003ad5820e8d9ba90] <==
	W0429 14:56:35.489650       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 14:56:35.490508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 14:56:35.501799       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 14:56:35.501853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 14:56:35.501948       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 14:56:35.501968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 14:56:35.502021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 14:56:35.502035       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 14:56:35.502070       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 14:56:35.502244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 14:56:35.502069       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 14:56:35.502325       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 14:56:35.502410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 14:56:35.502539       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 14:56:35.502451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 14:56:35.502633       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 14:56:35.502504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 14:56:35.502713       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 14:56:36.412006       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 14:56:36.412044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 14:56:36.536941       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 14:56:36.537063       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0429 14:56:38.452005       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 14:57:01.418069       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0429 14:57:01.418102       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bc0f928fe0a658fbe2067b9f43871766f66ec9782c3e5acc32b91270bd624674] <==
	I0429 14:57:11.465413       1 serving.go:380] Generated self-signed cert in-memory
	W0429 14:57:14.537131       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 14:57:14.537249       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 14:57:14.537285       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 14:57:14.537335       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 14:57:14.570972       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 14:57:14.571010       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 14:57:14.578800       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 14:57:14.578835       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 14:57:14.579594       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 14:57:14.579659       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 14:57:14.679365       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.053427    1543 status_manager.go:853] "Failed to get status for pod" podUID="27f2ec7fc84ebf646bd9a11699ffe034" pod="kube-system/kube-apiserver-pause-432914" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-432914\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.055349    1543 scope.go:117] "RemoveContainer" containerID="234f1bc29ee03822fc891e1eb09cc6c8593ba210feba7bf50c7b6cd9cf576542"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.056352    1543 status_manager.go:853] "Failed to get status for pod" podUID="ab286b5f-0748-42cd-870f-9d418a303bb1" pod="kube-system/kindnet-lw2xg" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-lw2xg\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.056602    1543 status_manager.go:853] "Failed to get status for pod" podUID="00af55d4-66ab-41c4-912b-3ff241e5cfaf" pod="kube-system/coredns-7db6d8ff4d-p5vp6" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p5vp6\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.056828    1543 status_manager.go:853] "Failed to get status for pod" podUID="d0e8083a-e394-42f9-8d90-d8b339a5093b" pod="kube-system/coredns-7db6d8ff4d-74wxc" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-74wxc\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.057062    1543 status_manager.go:853] "Failed to get status for pod" podUID="58991698cce5216594c83d7edf13102b" pod="kube-system/etcd-pause-432914" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-432914\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.057304    1543 status_manager.go:853] "Failed to get status for pod" podUID="27f2ec7fc84ebf646bd9a11699ffe034" pod="kube-system/kube-apiserver-pause-432914" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-432914\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.057580    1543 status_manager.go:853] "Failed to get status for pod" podUID="b8dc4b04e2d48e66300a95afc3ca2e55" pod="kube-system/kube-controller-manager-pause-432914" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-432914\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.057816    1543 status_manager.go:853] "Failed to get status for pod" podUID="36ae885b-82fa-4dd5-b824-77da146dc101" pod="kube-system/kube-proxy-5djxx" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5djxx\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.063851    1543 scope.go:117] "RemoveContainer" containerID="133431ffb9b984f5f6320a552799fdcc9af3dc92e0e3b77003ad5820e8d9ba90"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.068243    1543 status_manager.go:853] "Failed to get status for pod" podUID="ab286b5f-0748-42cd-870f-9d418a303bb1" pod="kube-system/kindnet-lw2xg" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-lw2xg\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.068651    1543 status_manager.go:853] "Failed to get status for pod" podUID="00af55d4-66ab-41c4-912b-3ff241e5cfaf" pod="kube-system/coredns-7db6d8ff4d-p5vp6" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p5vp6\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.069922    1543 status_manager.go:853] "Failed to get status for pod" podUID="d0e8083a-e394-42f9-8d90-d8b339a5093b" pod="kube-system/coredns-7db6d8ff4d-74wxc" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-74wxc\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.070229    1543 status_manager.go:853] "Failed to get status for pod" podUID="5601acafba42956e33b12392b14c4254" pod="kube-system/kube-scheduler-pause-432914" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-432914\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.070459    1543 status_manager.go:853] "Failed to get status for pod" podUID="58991698cce5216594c83d7edf13102b" pod="kube-system/etcd-pause-432914" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-432914\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.077127    1543 status_manager.go:853] "Failed to get status for pod" podUID="27f2ec7fc84ebf646bd9a11699ffe034" pod="kube-system/kube-apiserver-pause-432914" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-432914\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.077466    1543 status_manager.go:853] "Failed to get status for pod" podUID="b8dc4b04e2d48e66300a95afc3ca2e55" pod="kube-system/kube-controller-manager-pause-432914" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-432914\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.077701    1543 status_manager.go:853] "Failed to get status for pod" podUID="36ae885b-82fa-4dd5-b824-77da146dc101" pod="kube-system/kube-proxy-5djxx" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5djxx\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: E0429 14:57:09.267322    1543 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-432914?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="800ms"
	Apr 29 14:57:14 pause-432914 kubelet[1543]: E0429 14:57:14.516476    1543 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Apr 29 14:57:14 pause-432914 kubelet[1543]: E0429 14:57:14.517193    1543 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Apr 29 14:57:18 pause-432914 kubelet[1543]: W0429 14:57:18.026192    1543 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Apr 29 14:57:18 pause-432914 kubelet[1543]: W0429 14:57:18.028360    1543 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Apr 29 14:57:25 pause-432914 kubelet[1543]: I0429 14:57:25.782001    1543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-74wxc" podStartSLOduration=32.781982387 podStartE2EDuration="32.781982387s" podCreationTimestamp="2024-04-29 14:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 14:56:58.058467658 +0000 UTC m=+20.322146589" watchObservedRunningTime="2024-04-29 14:57:25.781982387 +0000 UTC m=+48.045661319"
	Apr 29 14:57:28 pause-432914 kubelet[1543]: W0429 14:57:28.039804    1543 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 14:57:28.956937 2068467 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18771-1897267/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
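The "bufio.Scanner: token too long" error in the stderr block above is Go's standard scanner hitting its default 64 KiB per-line limit while reading lastStart.txt, so the logs command could not echo the last start log. A minimal sketch of reading such a file with a larger scanner buffer (a hypothetical standalone helper, not minikube's actual logs code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	// readLongLines scans a file whose individual lines may exceed
	// bufio.Scanner's default 64 KiB token limit by supplying a larger buffer.
	func readLongLines(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Allow lines up to 1 MiB instead of the 64 KiB default.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		// Returns bufio.ErrTooLong if a line still exceeds the configured maximum.
		return sc.Err()
	}

	func main() {
		if err := readLongLines("lastStart.txt"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}

If a line still exceeds the configured maximum, Scan stops and Err returns bufio.ErrTooLong, which is the failure surfaced in the stderr output above.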
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-432914 -n pause-432914
helpers_test.go:261: (dbg) Run:  kubectl --context pause-432914 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-432914
helpers_test.go:235: (dbg) docker inspect pause-432914:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b701e3e8cc37c4a5e8511124c6ebdcaff59f8e6831023426096c99fc754df183",
	        "Created": "2024-04-29T14:56:11.415542922Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2062551,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-29T14:56:11.729215487Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c9315e0f61546d7b9630cf89252fa7f614fc966830e816cca5333df5c944376f",
	        "ResolvConfPath": "/var/lib/docker/containers/b701e3e8cc37c4a5e8511124c6ebdcaff59f8e6831023426096c99fc754df183/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b701e3e8cc37c4a5e8511124c6ebdcaff59f8e6831023426096c99fc754df183/hostname",
	        "HostsPath": "/var/lib/docker/containers/b701e3e8cc37c4a5e8511124c6ebdcaff59f8e6831023426096c99fc754df183/hosts",
	        "LogPath": "/var/lib/docker/containers/b701e3e8cc37c4a5e8511124c6ebdcaff59f8e6831023426096c99fc754df183/b701e3e8cc37c4a5e8511124c6ebdcaff59f8e6831023426096c99fc754df183-json.log",
	        "Name": "/pause-432914",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-432914:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-432914",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/66877a8a0e8c4bd0ab4eb515ad74d2b5ec12575808b65d2e62f7c641be78db98-init/diff:/var/lib/docker/overlay2/f080d6ed1efba2dbfce916f4260b407bf4d9204079d2708eb1c14f6847e489ec/diff",
	                "MergedDir": "/var/lib/docker/overlay2/66877a8a0e8c4bd0ab4eb515ad74d2b5ec12575808b65d2e62f7c641be78db98/merged",
	                "UpperDir": "/var/lib/docker/overlay2/66877a8a0e8c4bd0ab4eb515ad74d2b5ec12575808b65d2e62f7c641be78db98/diff",
	                "WorkDir": "/var/lib/docker/overlay2/66877a8a0e8c4bd0ab4eb515ad74d2b5ec12575808b65d2e62f7c641be78db98/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-432914",
	                "Source": "/var/lib/docker/volumes/pause-432914/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-432914",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-432914",
	                "name.minikube.sigs.k8s.io": "pause-432914",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e1de272df34ae3819533bc8524faad8b3ec823b59f4fa38080b7303e68c24856",
	            "SandboxKey": "/var/run/docker/netns/e1de272df34a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35297"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35296"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35293"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35295"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35294"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-432914": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "f3739010c6c2646bef56a873d01c39781ab34d562bf11750c83e972daedc8a30",
	                    "EndpointID": "7894bf8107ffadec6a88ccda14e5fd45e9ee894be6bd4793f135b8e2c613a305",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "pause-432914",
	                        "b701e3e8cc37"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-432914 -n pause-432914
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-432914 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-432914 logs -n 25: (2.148815088s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-991714         | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:50 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-991714         | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:50 UTC | 29 Apr 24 14:51 UTC |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-828310      | minikube                  | jenkins | v1.26.0 | 29 Apr 24 14:50 UTC | 29 Apr 24 14:52 UTC |
	|         | --memory=2200 --driver=docker  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-991714         | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC | 29 Apr 24 14:51 UTC |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-991714         | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC | 29 Apr 24 14:51 UTC |
	| start   | -p NoKubernetes-991714         | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC | 29 Apr 24 14:51 UTC |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-991714 sudo    | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-991714         | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC | 29 Apr 24 14:51 UTC |
	| start   | -p NoKubernetes-991714         | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC | 29 Apr 24 14:51 UTC |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-991714 sudo    | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-991714         | NoKubernetes-991714       | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC | 29 Apr 24 14:51 UTC |
	| start   | -p kubernetes-upgrade-960980   | kubernetes-upgrade-960980 | jenkins | v1.33.0 | 29 Apr 24 14:51 UTC | 29 Apr 24 14:53 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-828310      | missing-upgrade-828310    | jenkins | v1.33.0 | 29 Apr 24 14:52 UTC | 29 Apr 24 14:53 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-960980   | kubernetes-upgrade-960980 | jenkins | v1.33.0 | 29 Apr 24 14:53 UTC | 29 Apr 24 14:53 UTC |
	| start   | -p kubernetes-upgrade-960980   | kubernetes-upgrade-960980 | jenkins | v1.33.0 | 29 Apr 24 14:53 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-828310      | missing-upgrade-828310    | jenkins | v1.33.0 | 29 Apr 24 14:53 UTC | 29 Apr 24 14:53 UTC |
	| start   | -p stopped-upgrade-518259      | minikube                  | jenkins | v1.26.0 | 29 Apr 24 14:53 UTC | 29 Apr 24 14:54 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --vm-driver=docker             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-518259 stop    | minikube                  | jenkins | v1.26.0 | 29 Apr 24 14:54 UTC | 29 Apr 24 14:54 UTC |
	| start   | -p stopped-upgrade-518259      | stopped-upgrade-518259    | jenkins | v1.33.0 | 29 Apr 24 14:54 UTC | 29 Apr 24 14:54 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-518259      | stopped-upgrade-518259    | jenkins | v1.33.0 | 29 Apr 24 14:54 UTC | 29 Apr 24 14:54 UTC |
	| start   | -p running-upgrade-195173      | minikube                  | jenkins | v1.26.0 | 29 Apr 24 14:54 UTC | 29 Apr 24 14:55 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --vm-driver=docker             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-195173      | running-upgrade-195173    | jenkins | v1.33.0 | 29 Apr 24 14:55 UTC | 29 Apr 24 14:56 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-195173      | running-upgrade-195173    | jenkins | v1.33.0 | 29 Apr 24 14:56 UTC | 29 Apr 24 14:56 UTC |
	| start   | -p pause-432914 --memory=2048  | pause-432914              | jenkins | v1.33.0 | 29 Apr 24 14:56 UTC | 29 Apr 24 14:56 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-432914                | pause-432914              | jenkins | v1.33.0 | 29 Apr 24 14:56 UTC | 29 Apr 24 14:57 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 14:56:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 14:56:59.856941 2066013 out.go:291] Setting OutFile to fd 1 ...
	I0429 14:56:59.857109 2066013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:56:59.857118 2066013 out.go:304] Setting ErrFile to fd 2...
	I0429 14:56:59.857123 2066013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:56:59.857373 2066013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
	I0429 14:56:59.857746 2066013 out.go:298] Setting JSON to false
	I0429 14:56:59.858751 2066013 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":38364,"bootTime":1714364256,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 14:56:59.858833 2066013 start.go:139] virtualization:  
	I0429 14:56:59.863461 2066013 out.go:177] * [pause-432914] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 14:56:59.866098 2066013 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 14:56:59.866138 2066013 notify.go:220] Checking for updates...
	I0429 14:56:59.868797 2066013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 14:56:59.870876 2066013 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:56:59.873103 2066013 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	I0429 14:56:59.875290 2066013 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0429 14:56:59.877785 2066013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 14:56:59.880268 2066013 config.go:182] Loaded profile config "pause-432914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:56:59.880950 2066013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 14:56:59.903091 2066013 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 14:56:59.903212 2066013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:56:59.967278 2066013 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2024-04-29 14:56:59.958094854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:56:59.967378 2066013 docker.go:295] overlay module found
	I0429 14:56:59.969951 2066013 out.go:177] * Using the docker driver based on existing profile
	I0429 14:56:59.971975 2066013 start.go:297] selected driver: docker
	I0429 14:56:59.972003 2066013 start.go:901] validating driver "docker" against &{Name:pause-432914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-432914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:56:59.972120 2066013 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 14:56:59.972213 2066013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:57:00.111129 2066013 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2024-04-29 14:57:00.070054459 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:57:00.111792 2066013 cni.go:84] Creating CNI manager for ""
	I0429 14:57:00.111815 2066013 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 14:57:00.111906 2066013 start.go:340] cluster config:
	{Name:pause-432914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-432914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:57:00.115754 2066013 out.go:177] * Starting "pause-432914" primary control-plane node in "pause-432914" cluster
	I0429 14:57:00.119077 2066013 cache.go:121] Beginning downloading kic base image for docker with crio
	I0429 14:57:00.123422 2066013 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0429 14:57:00.126605 2066013 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:57:00.126666 2066013 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 14:57:00.127845 2066013 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	I0429 14:57:00.128139 2066013 cache.go:56] Caching tarball of preloaded images
	I0429 14:57:00.128262 2066013 preload.go:173] Found /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0429 14:57:00.128275 2066013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 14:57:00.128422 2066013 profile.go:143] Saving config to /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/config.json ...
	I0429 14:57:00.172818 2066013 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0429 14:57:00.172860 2066013 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0429 14:57:00.172895 2066013 cache.go:194] Successfully downloaded all kic artifacts
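(The cache step just above checks whether the kicbase image is already present in the local Docker daemon and skips the pull when it is. As a rough illustration only, not minikube's actual code: one way to perform that check is to shell out to `docker image inspect`, which exits non-zero when the image is absent. The helper name and the shortened image reference below are assumptions made for this sketch.)

    // Illustrative sketch only, not minikube source.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // imageInDaemon reports whether the given image reference exists in the local
    // Docker daemon; `docker image inspect` exits non-zero if it does not.
    func imageInDaemon(ref string) bool {
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        // Shortened reference used for illustration; the log uses the full digest-pinned ref.
        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706"
        if imageInDaemon(ref) {
            fmt.Println("found in daemon, skipping pull")
        } else {
            fmt.Println("not found, would pull")
        }
    }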
	I0429 14:57:00.172936 2066013 start.go:360] acquireMachinesLock for pause-432914: {Name:mk60e4243217024a35490e9d845b2c689d9870db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 14:57:00.173042 2066013 start.go:364] duration metric: took 74.363µs to acquireMachinesLock for "pause-432914"
	I0429 14:57:00.173071 2066013 start.go:96] Skipping create...Using existing machine configuration
	I0429 14:57:00.173089 2066013 fix.go:54] fixHost starting: 
	I0429 14:57:00.173418 2066013 cli_runner.go:164] Run: docker container inspect pause-432914 --format={{.State.Status}}
	I0429 14:57:00.204183 2066013 fix.go:112] recreateIfNeeded on pause-432914: state=Running err=<nil>
	W0429 14:57:00.204246 2066013 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 14:57:00.207908 2066013 out.go:177] * Updating the running docker "pause-432914" container ...
	I0429 14:56:58.153863 2048482 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0429 14:56:58.154274 2048482 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0429 14:56:58.154321 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 14:56:58.154410 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 14:56:58.202882 2048482 cri.go:89] found id: "c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77"
	I0429 14:56:58.202903 2048482 cri.go:89] found id: ""
	I0429 14:56:58.202911 2048482 logs.go:276] 1 containers: [c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77]
	I0429 14:56:58.202967 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:56:58.206609 2048482 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 14:56:58.206688 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 14:56:58.247456 2048482 cri.go:89] found id: ""
	I0429 14:56:58.247479 2048482 logs.go:276] 0 containers: []
	W0429 14:56:58.247488 2048482 logs.go:278] No container was found matching "etcd"
	I0429 14:56:58.247495 2048482 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 14:56:58.247557 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 14:56:58.284967 2048482 cri.go:89] found id: ""
	I0429 14:56:58.284996 2048482 logs.go:276] 0 containers: []
	W0429 14:56:58.285006 2048482 logs.go:278] No container was found matching "coredns"
	I0429 14:56:58.285013 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 14:56:58.285073 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 14:56:58.326027 2048482 cri.go:89] found id: "84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8"
	I0429 14:56:58.326048 2048482 cri.go:89] found id: ""
	I0429 14:56:58.326057 2048482 logs.go:276] 1 containers: [84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8]
	I0429 14:56:58.326111 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:56:58.329710 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 14:56:58.329777 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 14:56:58.374241 2048482 cri.go:89] found id: ""
	I0429 14:56:58.374264 2048482 logs.go:276] 0 containers: []
	W0429 14:56:58.374273 2048482 logs.go:278] No container was found matching "kube-proxy"
	I0429 14:56:58.374280 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 14:56:58.374338 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 14:56:58.419236 2048482 cri.go:89] found id: "8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539"
	I0429 14:56:58.419260 2048482 cri.go:89] found id: ""
	I0429 14:56:58.419268 2048482 logs.go:276] 1 containers: [8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539]
	I0429 14:56:58.419326 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:56:58.423190 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 14:56:58.423262 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 14:56:58.463178 2048482 cri.go:89] found id: ""
	I0429 14:56:58.463200 2048482 logs.go:276] 0 containers: []
	W0429 14:56:58.463211 2048482 logs.go:278] No container was found matching "kindnet"
	I0429 14:56:58.463247 2048482 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 14:56:58.463328 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 14:56:58.501245 2048482 cri.go:89] found id: ""
	I0429 14:56:58.501268 2048482 logs.go:276] 0 containers: []
	W0429 14:56:58.501277 2048482 logs.go:278] No container was found matching "storage-provisioner"
	I0429 14:56:58.501287 2048482 logs.go:123] Gathering logs for kube-apiserver [c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77] ...
	I0429 14:56:58.501299 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77"
	I0429 14:56:58.551684 2048482 logs.go:123] Gathering logs for kube-scheduler [84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8] ...
	I0429 14:56:58.551712 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8"
	I0429 14:56:58.646814 2048482 logs.go:123] Gathering logs for kube-controller-manager [8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539] ...
	I0429 14:56:58.646848 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539"
	I0429 14:56:58.687845 2048482 logs.go:123] Gathering logs for CRI-O ...
	I0429 14:56:58.687917 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 14:56:58.736484 2048482 logs.go:123] Gathering logs for container status ...
	I0429 14:56:58.736521 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 14:56:58.783286 2048482 logs.go:123] Gathering logs for kubelet ...
	I0429 14:56:58.783317 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 14:56:58.898265 2048482 logs.go:123] Gathering logs for dmesg ...
	I0429 14:56:58.898299 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 14:56:58.920434 2048482 logs.go:123] Gathering logs for describe nodes ...
	I0429 14:56:58.920508 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 14:56:59.007465 2048482 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
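(The recurring `Checking apiserver healthz at https://192.168.76.2:8443/healthz ...` / `stopped: ... connection refused` pairs in this log are a readiness poll: the client keeps probing the healthz endpoint until the API server answers. A minimal Go sketch of such a loop follows; it is an illustration under assumptions, not minikube's implementation. `waitForAPIServer` is a made-up helper, and TLS verification is skipped only to keep the sketch short, whereas a real client would trust the cluster CA.)

    // Illustrative sketch only, not minikube source.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForAPIServer polls <endpoint>/healthz until it returns 200 OK or the timeout expires.
    func waitForAPIServer(endpoint string, timeout time.Duration) error {
        client := &http.Client{
            // InsecureSkipVerify is an assumption for brevity; the real check verifies the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(endpoint + "/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver answered 200 OK
                }
            }
            // "connection refused", as seen in the log, just means not up yet: retry.
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s did not become healthy within %s", endpoint, timeout)
    }

    func main() {
        if err := waitForAPIServer("https://192.168.76.2:8443", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }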
	I0429 14:57:00.210335 2066013 machine.go:94] provisionDockerMachine start ...
	I0429 14:57:00.210488 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:00.234801 2066013 main.go:141] libmachine: Using SSH client type: native
	I0429 14:57:00.235147 2066013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35297 <nil> <nil>}
	I0429 14:57:00.235168 2066013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 14:57:00.380399 2066013 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-432914
	
	I0429 14:57:00.380429 2066013 ubuntu.go:169] provisioning hostname "pause-432914"
	I0429 14:57:00.380518 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:00.409210 2066013 main.go:141] libmachine: Using SSH client type: native
	I0429 14:57:00.409462 2066013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35297 <nil> <nil>}
	I0429 14:57:00.409488 2066013 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-432914 && echo "pause-432914" | sudo tee /etc/hostname
	I0429 14:57:00.549554 2066013 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-432914
	
	I0429 14:57:00.549707 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:00.567527 2066013 main.go:141] libmachine: Using SSH client type: native
	I0429 14:57:00.567802 2066013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35297 <nil> <nil>}
	I0429 14:57:00.567827 2066013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-432914' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-432914/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-432914' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 14:57:00.692845 2066013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 14:57:00.692871 2066013 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18771-1897267/.minikube CaCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18771-1897267/.minikube}
	I0429 14:57:00.692894 2066013 ubuntu.go:177] setting up certificates
	I0429 14:57:00.692904 2066013 provision.go:84] configureAuth start
	I0429 14:57:00.692964 2066013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-432914
	I0429 14:57:00.709078 2066013 provision.go:143] copyHostCerts
	I0429 14:57:00.709151 2066013 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem, removing ...
	I0429 14:57:00.709165 2066013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem
	I0429 14:57:00.709242 2066013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/key.pem (1679 bytes)
	I0429 14:57:00.709343 2066013 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem, removing ...
	I0429 14:57:00.709356 2066013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem
	I0429 14:57:00.709386 2066013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.pem (1078 bytes)
	I0429 14:57:00.709455 2066013 exec_runner.go:144] found /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem, removing ...
	I0429 14:57:00.709463 2066013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem
	I0429 14:57:00.709487 2066013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18771-1897267/.minikube/cert.pem (1123 bytes)
	I0429 14:57:00.709539 2066013 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem org=jenkins.pause-432914 san=[127.0.0.1 192.168.85.2 localhost minikube pause-432914]
	I0429 14:57:01.057253 2066013 provision.go:177] copyRemoteCerts
	I0429 14:57:01.057323 2066013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 14:57:01.057363 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:01.073781 2066013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35297 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/pause-432914/id_rsa Username:docker}
	I0429 14:57:01.178184 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0429 14:57:01.204914 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0429 14:57:01.230581 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 14:57:01.256104 2066013 provision.go:87] duration metric: took 563.186254ms to configureAuth
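(Each SSH-based provisioning step above first resolves the host port that Docker mapped to the container's 22/tcp, then connects to 127.0.0.1:<port> with the machine's id_rsa key. A small Go sketch of that port lookup, using the same `docker container inspect -f` template seen in the log; `sshHostPort` is a hypothetical helper written only for this illustration.)

    // Illustrative sketch only, not minikube source.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort returns the host port Docker published for the container's 22/tcp.
    func sshHostPort(container string) (string, error) {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", fmt.Errorf("docker inspect %s: %w", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("pause-432914")
        if err != nil {
            fmt.Println(err)
            return
        }
        // The log then opens an SSH client to 127.0.0.1:<port> (35297 in this run).
        fmt.Println("ssh port:", port)
    }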
	I0429 14:57:01.256130 2066013 ubuntu.go:193] setting minikube options for container-runtime
	I0429 14:57:01.256429 2066013 config.go:182] Loaded profile config "pause-432914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:57:01.256563 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:01.273774 2066013 main.go:141] libmachine: Using SSH client type: native
	I0429 14:57:01.274030 2066013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 35297 <nil> <nil>}
	I0429 14:57:01.274051 2066013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 14:57:01.508572 2048482 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0429 14:57:01.509009 2048482 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0429 14:57:01.509057 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 14:57:01.509122 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 14:57:01.550436 2048482 cri.go:89] found id: "c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77"
	I0429 14:57:01.550458 2048482 cri.go:89] found id: ""
	I0429 14:57:01.550466 2048482 logs.go:276] 1 containers: [c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77]
	I0429 14:57:01.550521 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:57:01.554166 2048482 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 14:57:01.554238 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 14:57:01.593090 2048482 cri.go:89] found id: ""
	I0429 14:57:01.593113 2048482 logs.go:276] 0 containers: []
	W0429 14:57:01.593122 2048482 logs.go:278] No container was found matching "etcd"
	I0429 14:57:01.593129 2048482 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 14:57:01.593189 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 14:57:01.631043 2048482 cri.go:89] found id: ""
	I0429 14:57:01.631066 2048482 logs.go:276] 0 containers: []
	W0429 14:57:01.631075 2048482 logs.go:278] No container was found matching "coredns"
	I0429 14:57:01.631081 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 14:57:01.631146 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 14:57:01.672037 2048482 cri.go:89] found id: "84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8"
	I0429 14:57:01.672060 2048482 cri.go:89] found id: ""
	I0429 14:57:01.672068 2048482 logs.go:276] 1 containers: [84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8]
	I0429 14:57:01.672123 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:57:01.675521 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 14:57:01.675588 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 14:57:01.711389 2048482 cri.go:89] found id: ""
	I0429 14:57:01.711412 2048482 logs.go:276] 0 containers: []
	W0429 14:57:01.711421 2048482 logs.go:278] No container was found matching "kube-proxy"
	I0429 14:57:01.711428 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 14:57:01.711486 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 14:57:01.754730 2048482 cri.go:89] found id: "8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539"
	I0429 14:57:01.754751 2048482 cri.go:89] found id: ""
	I0429 14:57:01.754759 2048482 logs.go:276] 1 containers: [8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539]
	I0429 14:57:01.754812 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:57:01.758277 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 14:57:01.758339 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 14:57:01.795553 2048482 cri.go:89] found id: ""
	I0429 14:57:01.795575 2048482 logs.go:276] 0 containers: []
	W0429 14:57:01.795584 2048482 logs.go:278] No container was found matching "kindnet"
	I0429 14:57:01.795591 2048482 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 14:57:01.795655 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 14:57:01.832198 2048482 cri.go:89] found id: ""
	I0429 14:57:01.832220 2048482 logs.go:276] 0 containers: []
	W0429 14:57:01.832229 2048482 logs.go:278] No container was found matching "storage-provisioner"
	I0429 14:57:01.832238 2048482 logs.go:123] Gathering logs for kubelet ...
	I0429 14:57:01.832249 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 14:57:01.943410 2048482 logs.go:123] Gathering logs for dmesg ...
	I0429 14:57:01.943450 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 14:57:01.962857 2048482 logs.go:123] Gathering logs for describe nodes ...
	I0429 14:57:01.962887 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 14:57:02.035863 2048482 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 14:57:02.035883 2048482 logs.go:123] Gathering logs for kube-apiserver [c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77] ...
	I0429 14:57:02.035896 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77"
	I0429 14:57:02.078597 2048482 logs.go:123] Gathering logs for kube-scheduler [84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8] ...
	I0429 14:57:02.078625 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8"
	I0429 14:57:02.172984 2048482 logs.go:123] Gathering logs for kube-controller-manager [8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539] ...
	I0429 14:57:02.173024 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539"
	I0429 14:57:02.219647 2048482 logs.go:123] Gathering logs for CRI-O ...
	I0429 14:57:02.219672 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 14:57:02.268538 2048482 logs.go:123] Gathering logs for container status ...
	I0429 14:57:02.268572 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 14:57:04.817441 2048482 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0429 14:57:04.817892 2048482 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0429 14:57:04.817939 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 14:57:04.817998 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 14:57:04.856000 2048482 cri.go:89] found id: "c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77"
	I0429 14:57:04.856020 2048482 cri.go:89] found id: ""
	I0429 14:57:04.856028 2048482 logs.go:276] 1 containers: [c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77]
	I0429 14:57:04.856087 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:57:04.859823 2048482 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 14:57:04.859888 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 14:57:04.896397 2048482 cri.go:89] found id: ""
	I0429 14:57:04.896420 2048482 logs.go:276] 0 containers: []
	W0429 14:57:04.896429 2048482 logs.go:278] No container was found matching "etcd"
	I0429 14:57:04.896435 2048482 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 14:57:04.896494 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 14:57:04.931605 2048482 cri.go:89] found id: ""
	I0429 14:57:04.931629 2048482 logs.go:276] 0 containers: []
	W0429 14:57:04.931638 2048482 logs.go:278] No container was found matching "coredns"
	I0429 14:57:04.931645 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 14:57:04.931702 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 14:57:04.967807 2048482 cri.go:89] found id: "84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8"
	I0429 14:57:04.967828 2048482 cri.go:89] found id: ""
	I0429 14:57:04.967836 2048482 logs.go:276] 1 containers: [84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8]
	I0429 14:57:04.967890 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:57:04.971274 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 14:57:04.971343 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 14:57:05.011056 2048482 cri.go:89] found id: ""
	I0429 14:57:05.011124 2048482 logs.go:276] 0 containers: []
	W0429 14:57:05.011142 2048482 logs.go:278] No container was found matching "kube-proxy"
	I0429 14:57:05.011150 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 14:57:05.011212 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 14:57:05.049029 2048482 cri.go:89] found id: "8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539"
	I0429 14:57:05.049049 2048482 cri.go:89] found id: ""
	I0429 14:57:05.049057 2048482 logs.go:276] 1 containers: [8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539]
	I0429 14:57:05.049116 2048482 ssh_runner.go:195] Run: which crictl
	I0429 14:57:05.052601 2048482 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 14:57:05.052752 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 14:57:05.093105 2048482 cri.go:89] found id: ""
	I0429 14:57:05.093132 2048482 logs.go:276] 0 containers: []
	W0429 14:57:05.093142 2048482 logs.go:278] No container was found matching "kindnet"
	I0429 14:57:05.093149 2048482 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 14:57:05.093214 2048482 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 14:57:05.133805 2048482 cri.go:89] found id: ""
	I0429 14:57:05.133830 2048482 logs.go:276] 0 containers: []
	W0429 14:57:05.133840 2048482 logs.go:278] No container was found matching "storage-provisioner"
	I0429 14:57:05.133849 2048482 logs.go:123] Gathering logs for kubelet ...
	I0429 14:57:05.133861 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 14:57:05.245501 2048482 logs.go:123] Gathering logs for dmesg ...
	I0429 14:57:05.245535 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 14:57:05.264490 2048482 logs.go:123] Gathering logs for describe nodes ...
	I0429 14:57:05.264517 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 14:57:05.333638 2048482 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 14:57:05.333656 2048482 logs.go:123] Gathering logs for kube-apiserver [c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77] ...
	I0429 14:57:05.333673 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c31a78b55b983beee2f315eccefd1c2427a220407094542763c46481d6604b77"
	I0429 14:57:05.375656 2048482 logs.go:123] Gathering logs for kube-scheduler [84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8] ...
	I0429 14:57:05.375683 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84e0b0ef57ce53a0f9093e16a33dbe1654a9c043e9c897557255c5dc8aeff6d8"
	I0429 14:57:05.466347 2048482 logs.go:123] Gathering logs for kube-controller-manager [8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539] ...
	I0429 14:57:05.466384 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c387030ef95e0ab6992c8980ad9a1f17bbe4613d039fbeec3d135c5f4618539"
	I0429 14:57:05.507687 2048482 logs.go:123] Gathering logs for CRI-O ...
	I0429 14:57:05.507712 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 14:57:05.553217 2048482 logs.go:123] Gathering logs for container status ...
	I0429 14:57:05.553253 2048482 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
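(The repeated `sudo crictl ps -a --quiet --name=<component>` commands above list container IDs per control-plane component so their logs can be gathered afterwards. A minimal Go sketch of that listing step; illustrative only, `listCRIContainers` is a made-up helper rather than minikube's API.)

    // Illustrative sketch only, not minikube source.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listCRIContainers shells out to crictl; --quiet prints one container ID per line,
    // --name filters by container name (e.g. kube-apiserver).
    func listCRIContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, fmt.Errorf("crictl ps failed: %w", err)
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listCRIContainers("kube-apiserver")
        if err != nil {
            fmt.Println(err)
            return
        }
        // An empty result corresponds to the `found id: ""` / `0 containers` lines in the log.
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }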
	I0429 14:57:06.657237 2066013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 14:57:06.657259 2066013 machine.go:97] duration metric: took 6.446894252s to provisionDockerMachine
	I0429 14:57:06.657271 2066013 start.go:293] postStartSetup for "pause-432914" (driver="docker")
	I0429 14:57:06.657283 2066013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 14:57:06.657343 2066013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 14:57:06.657390 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:06.673612 2066013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35297 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/pause-432914/id_rsa Username:docker}
	I0429 14:57:06.769664 2066013 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 14:57:06.772825 2066013 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0429 14:57:06.772860 2066013 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0429 14:57:06.772876 2066013 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0429 14:57:06.772883 2066013 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0429 14:57:06.772895 2066013 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-1897267/.minikube/addons for local assets ...
	I0429 14:57:06.772950 2066013 filesync.go:126] Scanning /home/jenkins/minikube-integration/18771-1897267/.minikube/files for local assets ...
	I0429 14:57:06.773030 2066013 filesync.go:149] local asset: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem -> 19026842.pem in /etc/ssl/certs
	I0429 14:57:06.773134 2066013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 14:57:06.781823 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem --> /etc/ssl/certs/19026842.pem (1708 bytes)
	I0429 14:57:06.806194 2066013 start.go:296] duration metric: took 148.908328ms for postStartSetup
	I0429 14:57:06.806300 2066013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 14:57:06.806355 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:06.822552 2066013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35297 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/pause-432914/id_rsa Username:docker}
	I0429 14:57:06.910211 2066013 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0429 14:57:06.915247 2066013 fix.go:56] duration metric: took 6.742161306s for fixHost
	I0429 14:57:06.915272 2066013 start.go:83] releasing machines lock for "pause-432914", held for 6.74221688s
	I0429 14:57:06.915366 2066013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-432914
	I0429 14:57:06.931459 2066013 ssh_runner.go:195] Run: cat /version.json
	I0429 14:57:06.931518 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:06.931777 2066013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 14:57:06.931829 2066013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-432914
	I0429 14:57:06.949712 2066013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35297 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/pause-432914/id_rsa Username:docker}
	I0429 14:57:06.952361 2066013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35297 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/pause-432914/id_rsa Username:docker}
	I0429 14:57:07.036591 2066013 ssh_runner.go:195] Run: systemctl --version
	I0429 14:57:07.152061 2066013 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 14:57:07.295531 2066013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 14:57:07.300168 2066013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 14:57:07.309257 2066013 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0429 14:57:07.309387 2066013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 14:57:07.318689 2066013 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
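(The find/mv commands just above disable any loopback CNI config by renaming it with a `.mk_disabled` suffix. A rough Go equivalent, shown only as a sketch; `disableLoopbackCNI` is a hypothetical helper.)

    // Illustrative sketch only, not minikube source.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // disableLoopbackCNI renames *loopback.conf* files in dir to <name>.mk_disabled,
    // skipping files that are already disabled.
    func disableLoopbackCNI(dir string) error {
        matches, err := filepath.Glob(filepath.Join(dir, "*loopback.conf*"))
        if err != nil {
            return err
        }
        for _, path := range matches {
            if filepath.Ext(path) == ".mk_disabled" {
                continue // already disabled
            }
            if err := os.Rename(path, path+".mk_disabled"); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        if err := disableLoopbackCNI("/etc/cni/net.d"); err != nil {
            fmt.Println(err)
        }
    }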
	I0429 14:57:07.318713 2066013 start.go:494] detecting cgroup driver to use...
	I0429 14:57:07.318747 2066013 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0429 14:57:07.318805 2066013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 14:57:07.331792 2066013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 14:57:07.343855 2066013 docker.go:217] disabling cri-docker service (if available) ...
	I0429 14:57:07.343920 2066013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 14:57:07.357152 2066013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 14:57:07.370524 2066013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 14:57:07.499416 2066013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 14:57:07.621891 2066013 docker.go:233] disabling docker service ...
	I0429 14:57:07.621964 2066013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 14:57:07.635580 2066013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 14:57:07.647851 2066013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 14:57:07.762420 2066013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 14:57:07.888941 2066013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 14:57:07.900859 2066013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 14:57:07.917641 2066013 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 14:57:07.917706 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.927195 2066013 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 14:57:07.927265 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.936976 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.947659 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.957552 2066013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 14:57:07.967046 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.977338 2066013 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.986955 2066013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 14:57:07.998835 2066013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 14:57:08.011130 2066013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 14:57:08.021093 2066013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:57:08.162398 2066013 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 14:57:08.346921 2066013 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 14:57:08.346990 2066013 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 14:57:08.353250 2066013 start.go:562] Will wait 60s for crictl version
	I0429 14:57:08.353312 2066013 ssh_runner.go:195] Run: which crictl
	I0429 14:57:08.357948 2066013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 14:57:08.435034 2066013 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0429 14:57:08.435117 2066013 ssh_runner.go:195] Run: crio --version
	I0429 14:57:08.476287 2066013 ssh_runner.go:195] Run: crio --version
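(After restarting CRI-O, the log notes it will wait up to 60s for /var/run/crio/crio.sock and then for the crictl version before proceeding. A minimal Go sketch of such a socket wait; illustrative only, `waitForSocket` is a made-up helper.)

    // Illustrative sketch only, not minikube source.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for the socket file to appear, up to the given timeout.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // socket file exists
            }
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }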
	I0429 14:57:08.553392 2066013 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.24.6 ...
	I0429 14:57:08.556106 2066013 cli_runner.go:164] Run: docker network inspect pause-432914 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0429 14:57:08.578343 2066013 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0429 14:57:08.582508 2066013 kubeadm.go:877] updating cluster {Name:pause-432914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-432914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 14:57:08.582660 2066013 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:57:08.582714 2066013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 14:57:08.643260 2066013 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 14:57:08.643283 2066013 crio.go:433] Images already preloaded, skipping extraction
	I0429 14:57:08.643337 2066013 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 14:57:08.703568 2066013 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 14:57:08.703595 2066013 cache_images.go:84] Images are preloaded, skipping loading
	I0429 14:57:08.703604 2066013 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.30.0 crio true true} ...
	I0429 14:57:08.703711 2066013 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-432914 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:pause-432914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
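Note: the unit fragment above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A hedged sketch for inspecting the effective unit on the node with standard systemd commands (not minikube output):
	  # show the kubelet unit plus all drop-ins, including the 10-kubeadm.conf written below
	  sudo systemctl cat kubelet
	  # confirm the overridden ExecStart took effect
	  systemctl show kubelet -p ExecStart --no-pager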
	I0429 14:57:08.703803 2066013 ssh_runner.go:195] Run: crio config
	I0429 14:57:08.792543 2066013 cni.go:84] Creating CNI manager for ""
	I0429 14:57:08.792562 2066013 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 14:57:08.792578 2066013 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 14:57:08.792600 2066013 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-432914 NodeName:pause-432914 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 14:57:08.792773 2066013 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-432914"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
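Note: the generated kubeadm config above is uploaded as /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below). As a sanity check, outside of what the test harness itself does, it could be rendered and validated with kubeadm; on an already-initialized node preflight errors are expected and would need --ignore-preflight-errors=all. A sketch:
	  sudo cat /var/tmp/minikube/kubeadm.yaml.new
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run --ignore-preflight-errors=all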
	
	I0429 14:57:08.792850 2066013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 14:57:08.802178 2066013 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 14:57:08.802241 2066013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 14:57:08.810666 2066013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0429 14:57:08.830217 2066013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 14:57:08.849023 2066013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0429 14:57:08.875753 2066013 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0429 14:57:08.880752 2066013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 14:57:09.128182 2066013 ssh_runner.go:195] Run: sudo systemctl start kubelet
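Note: after the daemon-reload and start above, a quick way to confirm the kubelet came up with the new drop-in (a sketch, not part of the test flow):
	  systemctl is-active kubelet
	  sudo journalctl -u kubelet --no-pager -n 20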
	I0429 14:57:09.230635 2066013 certs.go:68] Setting up /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914 for IP: 192.168.85.2
	I0429 14:57:09.230654 2066013 certs.go:194] generating shared ca certs ...
	I0429 14:57:09.230682 2066013 certs.go:226] acquiring lock for ca certs: {Name:mk012c6865f9f1625b7bfd5d0280b6707793520e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 14:57:09.230838 2066013 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key
	I0429 14:57:09.230884 2066013 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key
	I0429 14:57:09.230891 2066013 certs.go:256] generating profile certs ...
	I0429 14:57:09.230977 2066013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/client.key
	I0429 14:57:09.231037 2066013 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/apiserver.key.01cd6b34
	I0429 14:57:09.231074 2066013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/proxy-client.key
	I0429 14:57:09.231175 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684.pem (1338 bytes)
	W0429 14:57:09.231200 2066013 certs.go:480] ignoring /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684_empty.pem, impossibly tiny 0 bytes
	I0429 14:57:09.231208 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 14:57:09.231236 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/ca.pem (1078 bytes)
	I0429 14:57:09.231258 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/cert.pem (1123 bytes)
	I0429 14:57:09.231284 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/key.pem (1679 bytes)
	I0429 14:57:09.231326 2066013 certs.go:484] found cert: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem (1708 bytes)
	I0429 14:57:09.231935 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 14:57:09.340414 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 14:57:09.398138 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 14:57:09.497027 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 14:57:09.592245 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0429 14:57:09.634082 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 14:57:09.682931 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 14:57:09.730044 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/pause-432914/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 14:57:09.777792 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 14:57:09.822642 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/certs/1902684.pem --> /usr/share/ca-certificates/1902684.pem (1338 bytes)
	I0429 14:57:09.875788 2066013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/ssl/certs/19026842.pem --> /usr/share/ca-certificates/19026842.pem (1708 bytes)
	I0429 14:57:09.925488 2066013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 14:57:09.953926 2066013 ssh_runner.go:195] Run: openssl version
	I0429 14:57:09.969086 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 14:57:09.989799 2066013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:57:10.004602 2066013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 14:07 /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:57:10.004874 2066013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 14:57:10.021257 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 14:57:10.048197 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1902684.pem && ln -fs /usr/share/ca-certificates/1902684.pem /etc/ssl/certs/1902684.pem"
	I0429 14:57:10.064364 2066013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1902684.pem
	I0429 14:57:10.068571 2066013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 14:18 /usr/share/ca-certificates/1902684.pem
	I0429 14:57:10.068736 2066013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1902684.pem
	I0429 14:57:10.076289 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1902684.pem /etc/ssl/certs/51391683.0"
	I0429 14:57:10.109542 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19026842.pem && ln -fs /usr/share/ca-certificates/19026842.pem /etc/ssl/certs/19026842.pem"
	I0429 14:57:10.135614 2066013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19026842.pem
	I0429 14:57:10.147807 2066013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 14:18 /usr/share/ca-certificates/19026842.pem
	I0429 14:57:10.147922 2066013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19026842.pem
	I0429 14:57:10.163407 2066013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19026842.pem /etc/ssl/certs/3ec20f2e.0"
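Note: the ln -fs targets above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject-hash names of the copied certificates; the hashes can be reproduced directly. A sketch using the same paths shown in the log:
	  # the symlink name under /etc/ssl/certs is "<subject hash>.0"
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expect b5213941
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/1902684.pem      # expect 51391683
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/19026842.pem     # expect 3ec20f2e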
	I0429 14:57:10.197650 2066013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 14:57:10.207691 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 14:57:10.221662 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 14:57:10.238407 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 14:57:10.254520 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 14:57:10.267199 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 14:57:10.282949 2066013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
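Note: each -checkend 86400 call above asks whether the certificate expires within the next 24 hours (86,400 seconds); exit status 0 means it remains valid. A sketch of the same check with a readable result (one of the cert paths from the log):
	  sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	    && echo "valid for at least 24h" || echo "expires within 24h"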
	I0429 14:57:10.295017 2066013 kubeadm.go:391] StartCluster: {Name:pause-432914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-432914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-cr
eds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:57:10.295193 2066013 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 14:57:10.295301 2066013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 14:57:10.351014 2066013 cri.go:89] found id: "a345b76233ad4e66cdff7662b867c232d392ed9c5f83d138ad1ebe3237aabca5"
	I0429 14:57:10.351088 2066013 cri.go:89] found id: "bc0f928fe0a658fbe2067b9f43871766f66ec9782c3e5acc32b91270bd624674"
	I0429 14:57:10.351107 2066013 cri.go:89] found id: "1cf12f4f214ef08f0e8c3f0dcb29a53ef72f4992a5dbe6a3df52d0d3751eda67"
	I0429 14:57:10.351122 2066013 cri.go:89] found id: "75470843e8018da0a5e303803151792aa9c26806ed406f16a72c26e9ce798d98"
	I0429 14:57:10.351139 2066013 cri.go:89] found id: "229390072c65722944c9a0fa5cf8515c9bbdeb9d3b6ae5aa6e6fa92958f427cb"
	I0429 14:57:10.351173 2066013 cri.go:89] found id: "cc2a18c92b0145ea5f55e58e03c469864baaf8c8fb578f2f06ca7713c485e56b"
	I0429 14:57:10.351190 2066013 cri.go:89] found id: "e729ee3b8b03da4783e7a9039abe12fa2bf82753e1f70aaf14e2c9a1374a0d71"
	I0429 14:57:10.351208 2066013 cri.go:89] found id: "8b6b1e430c982f19870d1aa416f56a799ba394846813c0552d13c3696f36bf50"
	I0429 14:57:10.351226 2066013 cri.go:89] found id: "0308fcfdc97181969e31209745d6443d2f452bfdd05aceb64ed104406c36e134"
	I0429 14:57:10.351255 2066013 cri.go:89] found id: "35331f4b60b1fc1c99b4f02d3288515822a4de533c83b26ed2aef975beac13e6"
	I0429 14:57:10.351279 2066013 cri.go:89] found id: "960e58a37bfe8df05c93e4c739b85c0523d77bbcadcdac18c09640f06af6c076"
	I0429 14:57:10.351297 2066013 cri.go:89] found id: "d90fc1365e9e01da73afcded913c468886648421a06c72f50e7822767e16769e"
	I0429 14:57:10.351314 2066013 cri.go:89] found id: "69c7e75b4d4f3ef42f8bfa1e90c23b93b7fe66a5cbd4bcead85fb41ed26a968f"
	I0429 14:57:10.351332 2066013 cri.go:89] found id: "133431ffb9b984f5f6320a552799fdcc9af3dc92e0e3b77003ad5820e8d9ba90"
	I0429 14:57:10.351364 2066013 cri.go:89] found id: "234f1bc29ee03822fc891e1eb09cc6c8593ba210feba7bf50c7b6cd9cf576542"
	I0429 14:57:10.351385 2066013 cri.go:89] found id: "a69ea36f1e8a4813b9516c3cd8c741c48a421dbd0fe66fb443e6d657c887230d"
	I0429 14:57:10.351403 2066013 cri.go:89] found id: ""
	I0429 14:57:10.351501 2066013 ssh_runner.go:195] Run: sudo runc list -f json
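Note: the container IDs listed above come from crictl filtered to the kube-system namespace label. Roughly the same listing can be reproduced on the node; a sketch (the IDs themselves are specific to this run):
	  sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system --quiet
	  # human-readable variant, as shown in the "container status" section below
	  sudo crictl ps -a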
	
	
	==> CRI-O <==
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.453379839Z" level=info msg="Starting container: bc0f928fe0a658fbe2067b9f43871766f66ec9782c3e5acc32b91270bd624674" id=b1b9e2b4-bf76-4cce-805b-fae463d57be4 name=/runtime.v1.RuntimeService/StartContainer
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.467035158Z" level=info msg="Started container" PID=2787 containerID=75470843e8018da0a5e303803151792aa9c26806ed406f16a72c26e9ce798d98 description=kube-system/coredns-7db6d8ff4d-p5vp6/coredns id=e689b3d4-5e4d-45fe-bd07-ce78ef92f89f name=/runtime.v1.RuntimeService/StartContainer sandboxID=f443483c3aa22c4b8127fb8365af21dee350a4a7527ce7e57746506db5c68099
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.487146486Z" level=info msg="Started container" PID=2724 containerID=cc2a18c92b0145ea5f55e58e03c469864baaf8c8fb578f2f06ca7713c485e56b description=kube-system/kube-apiserver-pause-432914/kube-apiserver id=b1d4af90-942f-456d-a2bd-6d67ec5bd3f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=93c46304ef8cd9a1cc8cadd3efafee4dc9ae4243c1d7f0dc9d012c7ee4e239ab
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.488621664Z" level=info msg="Created container a345b76233ad4e66cdff7662b867c232d392ed9c5f83d138ad1ebe3237aabca5: kube-system/coredns-7db6d8ff4d-74wxc/coredns" id=841e19b7-795c-47e8-a277-4e0ef458f8d3 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.489265773Z" level=info msg="Starting container: a345b76233ad4e66cdff7662b867c232d392ed9c5f83d138ad1ebe3237aabca5" id=6985fea4-9ed4-41c5-b0b5-f6f513f26a8b name=/runtime.v1.RuntimeService/StartContainer
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.507932487Z" level=info msg="Created container 8b6b1e430c982f19870d1aa416f56a799ba394846813c0552d13c3696f36bf50: kube-system/etcd-pause-432914/etcd" id=573c260d-4a53-44aa-ae7e-e43057a86411 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.508600439Z" level=info msg="Starting container: 8b6b1e430c982f19870d1aa416f56a799ba394846813c0552d13c3696f36bf50" id=57bf36f8-c234-4990-9d76-87eb6495865b name=/runtime.v1.RuntimeService/StartContainer
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.513431454Z" level=info msg="Created container 229390072c65722944c9a0fa5cf8515c9bbdeb9d3b6ae5aa6e6fa92958f427cb: kube-system/kube-proxy-5djxx/kube-proxy" id=bc2179fe-52dc-40b2-989d-c8fafa0c134a name=/runtime.v1.RuntimeService/CreateContainer
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.514007238Z" level=info msg="Starting container: 229390072c65722944c9a0fa5cf8515c9bbdeb9d3b6ae5aa6e6fa92958f427cb" id=80c81951-ef62-47fd-87b5-bce672b0c496 name=/runtime.v1.RuntimeService/StartContainer
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.521577303Z" level=info msg="Started container" PID=2784 containerID=bc0f928fe0a658fbe2067b9f43871766f66ec9782c3e5acc32b91270bd624674 description=kube-system/kube-scheduler-pause-432914/kube-scheduler id=b1b9e2b4-bf76-4cce-805b-fae463d57be4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b62f440c2c9645a9e7168d508a4b6ea8ba807b553b7a88ca15a70be4266f0603
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.530456221Z" level=info msg="Started container" PID=2822 containerID=a345b76233ad4e66cdff7662b867c232d392ed9c5f83d138ad1ebe3237aabca5 description=kube-system/coredns-7db6d8ff4d-74wxc/coredns id=6985fea4-9ed4-41c5-b0b5-f6f513f26a8b name=/runtime.v1.RuntimeService/StartContainer sandboxID=bb91810ea9d82192a39cfa75e13178e50cd13772cb96f3655bb8267b023e674c
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.537761155Z" level=info msg="Started container" PID=2726 containerID=229390072c65722944c9a0fa5cf8515c9bbdeb9d3b6ae5aa6e6fa92958f427cb description=kube-system/kube-proxy-5djxx/kube-proxy id=80c81951-ef62-47fd-87b5-bce672b0c496 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5badd92100177ef017e1410375de5f741265a345f120b1bf3ed4c1c9b6413bb5
	Apr 29 14:57:09 pause-432914 crio[2547]: time="2024-04-29 14:57:09.539790374Z" level=info msg="Started container" PID=2694 containerID=8b6b1e430c982f19870d1aa416f56a799ba394846813c0552d13c3696f36bf50 description=kube-system/etcd-pause-432914/etcd id=57bf36f8-c234-4990-9d76-87eb6495865b name=/runtime.v1.RuntimeService/StartContainer sandboxID=dcdd0ca1bdaead53078f16252e630c04903d51b0daf5affcbe645d8c14c8d578
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.624802957Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.641064806Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.641099612Z" level=info msg="Updated default CNI network name to kindnet"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.641123070Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.659043867Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.659077647Z" level=info msg="Updated default CNI network name to kindnet"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.659093606Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.678426024Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.678467444Z" level=info msg="Updated default CNI network name to kindnet"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.678496998Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.702825501Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Apr 29 14:57:14 pause-432914 crio[2547]: time="2024-04-29 14:57:14.702860069Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a345b76233ad4       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   23 seconds ago       Running             coredns                   1                   bb91810ea9d82       coredns-7db6d8ff4d-74wxc
	bc0f928fe0a65       547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a   23 seconds ago       Running             kube-scheduler            1                   b62f440c2c964       kube-scheduler-pause-432914
	1cf12f4f214ef       68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1   23 seconds ago       Running             kube-controller-manager   1                   2f0726093feba       kube-controller-manager-pause-432914
	75470843e8018       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   23 seconds ago       Running             coredns                   1                   f443483c3aa22       coredns-7db6d8ff4d-p5vp6
	229390072c657       cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f   23 seconds ago       Running             kube-proxy                1                   5badd92100177       kube-proxy-5djxx
	cc2a18c92b014       181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb   23 seconds ago       Running             kube-apiserver            1                   93c46304ef8cd       kube-apiserver-pause-432914
	e729ee3b8b03d       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d   23 seconds ago       Running             kindnet-cni               1                   b0429fa604682       kindnet-lw2xg
	8b6b1e430c982       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd   23 seconds ago       Running             etcd                      1                   dcdd0ca1bdaea       etcd-pause-432914
	0308fcfdc9718       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   36 seconds ago       Exited              coredns                   0                   f443483c3aa22       coredns-7db6d8ff4d-p5vp6
	35331f4b60b1f       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   36 seconds ago       Exited              coredns                   0                   bb91810ea9d82       coredns-7db6d8ff4d-74wxc
	960e58a37bfe8       4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d   38 seconds ago       Exited              kindnet-cni               0                   b0429fa604682       kindnet-lw2xg
	d90fc1365e9e0       cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f   38 seconds ago       Exited              kube-proxy                0                   5badd92100177       kube-proxy-5djxx
	69c7e75b4d4f3       181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb   About a minute ago   Exited              kube-apiserver            0                   93c46304ef8cd       kube-apiserver-pause-432914
	133431ffb9b98       547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a   About a minute ago   Exited              kube-scheduler            0                   b62f440c2c964       kube-scheduler-pause-432914
	234f1bc29ee03       68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1   About a minute ago   Exited              kube-controller-manager   0                   2f0726093feba       kube-controller-manager-pause-432914
	a69ea36f1e8a4       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd   About a minute ago   Exited              etcd                      0                   dcdd0ca1bdaea       etcd-pause-432914
	
	
	==> coredns [0308fcfdc97181969e31209745d6443d2f452bfdd05aceb64ed104406c36e134] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34030 - 9411 "HINFO IN 4244158090950760647.4628995706976055698. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020159706s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [35331f4b60b1fc1c99b4f02d3288515822a4de533c83b26ed2aef975beac13e6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39358 - 8965 "HINFO IN 5143244625229942013.258205165521647658. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021697087s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [75470843e8018da0a5e303803151792aa9c26806ed406f16a72c26e9ce798d98] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36116 - 2937 "HINFO IN 994152511160688042.639342022059642520. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.013200559s
	
	
	==> coredns [a345b76233ad4e66cdff7662b867c232d392ed9c5f83d138ad1ebe3237aabca5] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39744 - 14407 "HINFO IN 6128929205827398344.4425042968035430364. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02225628s
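Note: the connection-refused errors in the earlier coredns blocks line up with the apiserver restart; once 10.96.0.1:443 is reachable again, each instance finishes its cache sync and serves on :53. A hedged sketch for checking the DNS pods from outside the node (the context name matches the minikube profile, and the pod names are the ones from this run):
	  kubectl --context pause-432914 -n kube-system get pods -l k8s-app=kube-dns
	  kubectl --context pause-432914 -n kube-system logs coredns-7db6d8ff4d-74wxc --previous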
	
	
	==> describe nodes <==
	Name:               pause-432914
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-432914
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=94a01df0d4b48636d4af3d06a53be687e06c0844
	                    minikube.k8s.io/name=pause-432914
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T14_56_38_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 14:56:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-432914
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 14:57:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 14:56:55 +0000   Mon, 29 Apr 2024 14:56:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 14:56:55 +0000   Mon, 29 Apr 2024 14:56:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 14:56:55 +0000   Mon, 29 Apr 2024 14:56:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 14:56:55 +0000   Mon, 29 Apr 2024 14:56:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-432914
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 ea8106e7a72e4845a4ac0fc4d3c44457
	  System UUID:                780f4676-432b-49c8-bedc-bb93c40e086a
	  Boot ID:                    b8f2360a-0b19-4e04-aa8c-604719eae8f1
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-74wxc                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     40s
	  kube-system                 coredns-7db6d8ff4d-p5vp6                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     40s
	  kube-system                 etcd-pause-432914                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         55s
	  kube-system                 kindnet-lw2xg                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      41s
	  kube-system                 kube-apiserver-pause-432914             250m (12%)    0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-controller-manager-pause-432914    200m (10%)    0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-proxy-5djxx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-scheduler-pause-432914             100m (5%)     0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 39s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node pause-432914 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node pause-432914 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x8 over 64s)  kubelet          Node pause-432914 status is now: NodeHasSufficientPID
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s                kubelet          Node pause-432914 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s                kubelet          Node pause-432914 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s                kubelet          Node pause-432914 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                node-controller  Node pause-432914 event: Registered Node pause-432914 in Controller
	  Normal  NodeReady                38s                kubelet          Node pause-432914 status is now: NodeReady
	  Normal  RegisteredNode           7s                 node-controller  Node pause-432914 event: Registered Node pause-432914 in Controller
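Note: the node description above (taints cleared, Ready since 14:56:55, pod CIDR 10.244.0.0/24) can be re-queried at any point; a sketch (context name assumed to match the profile):
	  kubectl --context pause-432914 describe node pause-432914
	  kubectl --context pause-432914 get node pause-432914 -o jsonpath='{.spec.podCIDR}{"\n"}'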
	
	
	==> dmesg <==
	[  +0.001039] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000c85fb157
	[  +0.001211] FS-Cache: N-key=[8] 'ef445c0100000000'
	[  +0.002692] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=00000072 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001189] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=0000000050015021
	[  +0.001188] FS-Cache: O-key=[8] 'ef445c0100000000'
	[  +0.000776] FS-Cache: N-cookie c=00000079 [p=0000006f fl=2 nc=0 na=1]
	[  +0.000979] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=000000008e2f8fe5
	[  +0.001268] FS-Cache: N-key=[8] 'ef445c0100000000'
	[  +3.179364] FS-Cache: Duplicate cookie detected
	[  +0.000801] FS-Cache: O-cookie c=00000070 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001026] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=00000000c9ff6823
	[  +0.001183] FS-Cache: O-key=[8] 'ee445c0100000000'
	[  +0.000813] FS-Cache: N-cookie c=0000007b [p=0000006f fl=2 nc=0 na=1]
	[  +0.000953] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000c85fb157
	[  +0.001094] FS-Cache: N-key=[8] 'ee445c0100000000'
	[  +0.286519] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=00000075 [p=0000006f fl=226 nc=0 na=1]
	[  +0.000964] FS-Cache: O-cookie d=00000000a8cdaa2f{9p.inode} n=00000000adc83d13
	[  +0.001065] FS-Cache: O-key=[8] 'f4445c0100000000'
	[  +0.000785] FS-Cache: N-cookie c=0000007c [p=0000006f fl=2 nc=0 na=1]
	[  +0.000978] FS-Cache: N-cookie d=00000000a8cdaa2f{9p.inode} n=00000000652a63b0
	[  +0.001125] FS-Cache: N-key=[8] 'f4445c0100000000'
	[Apr29 14:55] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +0.628253] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [8b6b1e430c982f19870d1aa416f56a799ba394846813c0552d13c3696f36bf50] <==
	{"level":"info","ts":"2024-04-29T14:57:09.805081Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T14:57:09.805105Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T14:57:09.805312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2024-04-29T14:57:09.805378Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2024-04-29T14:57:09.805485Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T14:57:09.805516Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T14:57:09.819193Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T14:57:09.819407Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T14:57:09.819435Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T14:57:09.819556Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-04-29T14:57:09.81957Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-04-29T14:57:11.476705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T14:57:11.476821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T14:57:11.476874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-04-29T14:57:11.476915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T14:57:11.47695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2024-04-29T14:57:11.476986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2024-04-29T14:57:11.477022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2024-04-29T14:57:11.480938Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-432914 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T14:57:11.48114Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T14:57:11.48141Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T14:57:11.484326Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2024-04-29T14:57:11.493889Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T14:57:11.494231Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T14:57:11.495516Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [a69ea36f1e8a4813b9516c3cd8c741c48a421dbd0fe66fb443e6d657c887230d] <==
	{"level":"info","ts":"2024-04-29T14:56:31.180818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T14:56:31.180825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-04-29T14:56:31.180835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2024-04-29T14:56:31.180843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-04-29T14:56:31.184787Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T14:56:31.187484Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-432914 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T14:56:31.187629Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T14:56:31.187936Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T14:56:31.188627Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T14:56:31.188656Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T14:56:31.190189Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T14:56:31.190318Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T14:56:31.190373Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T14:56:31.191831Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2024-04-29T14:56:31.205666Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T14:57:01.422767Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-29T14:57:01.422898Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-432914","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"warn","ts":"2024-04-29T14:57:01.422976Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T14:57:01.42306Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T14:57:01.469679Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T14:57:01.469737Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-29T14:57:01.469812Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2024-04-29T14:57:01.47284Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-04-29T14:57:01.472981Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-04-29T14:57:01.472997Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-432914","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 14:57:33 up 10:39,  0 users,  load average: 3.39, 2.98, 2.41
	Linux pause-432914 5.15.0-1058-aws #64~20.04.1-Ubuntu SMP Tue Apr 9 11:11:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [960e58a37bfe8df05c93e4c739b85c0523d77bbcadcdac18c09640f06af6c076] <==
	I0429 14:56:54.316933       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0429 14:56:54.317007       1 main.go:107] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0429 14:56:54.317112       1 main.go:116] setting mtu 1500 for CNI 
	I0429 14:56:54.317122       1 main.go:146] kindnetd IP family: "ipv4"
	I0429 14:56:54.317131       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0429 14:56:54.715599       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 14:56:54.715632       1 main.go:227] handling current node
	
	
	==> kindnet [e729ee3b8b03da4783e7a9039abe12fa2bf82753e1f70aaf14e2c9a1374a0d71] <==
	I0429 14:57:09.497125       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0429 14:57:09.501440       1 main.go:107] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0429 14:57:09.501763       1 main.go:116] setting mtu 1500 for CNI 
	I0429 14:57:09.503653       1 main.go:146] kindnetd IP family: "ipv4"
	I0429 14:57:09.503683       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0429 14:57:09.741390       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0429 14:57:09.741698       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0429 14:57:14.624426       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 14:57:14.624538       1 main.go:227] handling current node
	I0429 14:57:24.639315       1 main.go:223] Handling node with IPs: map[192.168.85.2:{}]
	I0429 14:57:24.639342       1 main.go:227] handling current node
	
	
	==> kube-apiserver [69c7e75b4d4f3ef42f8bfa1e90c23b93b7fe66a5cbd4bcead85fb41ed26a968f] <==
	W0429 14:57:01.458329       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458362       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458390       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458422       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458454       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458480       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458513       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458649       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458672       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458693       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458714       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458736       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 14:57:01.458764       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0429 14:57:01.458834       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0429 14:57:01.458959       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0429 14:57:01.458985       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0429 14:57:01.459030       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0429 14:57:01.462460       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0429 14:57:01.459068       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0429 14:57:01.463046       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0429 14:57:01.463178       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0429 14:57:01.463672       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0429 14:57:01.464906       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0429 14:57:01.469943       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0429 14:57:01.470058       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	
	==> kube-apiserver [cc2a18c92b0145ea5f55e58e03c469864baaf8c8fb578f2f06ca7713c485e56b] <==
	I0429 14:57:14.311826       1 naming_controller.go:291] Starting NamingConditionController
	I0429 14:57:14.311835       1 establishing_controller.go:76] Starting EstablishingController
	I0429 14:57:14.311845       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0429 14:57:14.311853       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0429 14:57:14.311860       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0429 14:57:14.312129       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0429 14:57:14.519782       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0429 14:57:14.548559       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 14:57:14.548610       1 policy_source.go:224] refreshing policies
	I0429 14:57:14.552384       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 14:57:14.554689       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 14:57:14.621042       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 14:57:14.631549       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 14:57:14.631613       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 14:57:14.631621       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 14:57:14.631635       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 14:57:14.631692       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 14:57:14.636184       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 14:57:14.638030       1 aggregator.go:165] initial CRD sync complete...
	I0429 14:57:14.638142       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 14:57:14.638177       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 14:57:14.638208       1 cache.go:39] Caches are synced for autoregister controller
	I0429 14:57:14.666468       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0429 14:57:14.689374       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0429 14:57:15.311482       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	
	
	==> kube-controller-manager [1cf12f4f214ef08f0e8c3f0dcb29a53ef72f4992a5dbe6a3df52d0d3751eda67] <==
	I0429 14:57:26.839587       1 shared_informer.go:320] Caches are synced for disruption
	I0429 14:57:26.839601       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0429 14:57:26.839610       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0429 14:57:26.840292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.204µs"
	I0429 14:57:26.839620       1 shared_informer.go:320] Caches are synced for persistent volume
	I0429 14:57:26.840593       1 shared_informer.go:320] Caches are synced for job
	I0429 14:57:26.845338       1 shared_informer.go:320] Caches are synced for stateful set
	I0429 14:57:26.858459       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0429 14:57:26.859581       1 shared_informer.go:320] Caches are synced for taint
	I0429 14:57:26.859678       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0429 14:57:26.859767       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-432914"
	I0429 14:57:26.859827       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0429 14:57:26.860353       1 shared_informer.go:320] Caches are synced for GC
	I0429 14:57:26.861429       1 shared_informer.go:320] Caches are synced for daemon sets
	I0429 14:57:26.862802       1 shared_informer.go:320] Caches are synced for deployment
	I0429 14:57:26.890402       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0429 14:57:26.892797       1 shared_informer.go:320] Caches are synced for crt configmap
	I0429 14:57:26.949561       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 14:57:26.952853       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0429 14:57:26.958228       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0429 14:57:26.970710       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 14:57:26.987975       1 shared_informer.go:320] Caches are synced for endpoint
	I0429 14:57:27.401749       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 14:57:27.401880       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 14:57:27.407926       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [234f1bc29ee03822fc891e1eb09cc6c8593ba210feba7bf50c7b6cd9cf576542] <==
	I0429 14:56:51.948435       1 shared_informer.go:320] Caches are synced for crt configmap
	I0429 14:56:51.954563       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0429 14:56:51.964208       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="pause-432914" podCIDRs=["10.244.0.0/24"]
	I0429 14:56:51.980763       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 14:56:52.020804       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0429 14:56:52.029041       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 14:56:52.047040       1 shared_informer.go:320] Caches are synced for endpoint
	I0429 14:56:52.124553       1 shared_informer.go:320] Caches are synced for persistent volume
	I0429 14:56:52.551751       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 14:56:52.575867       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 14:56:52.575901       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 14:56:53.054735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="352.233705ms"
	I0429 14:56:53.098596       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.811289ms"
	I0429 14:56:53.126449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.796798ms"
	I0429 14:56:53.126591       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.383µs"
	I0429 14:56:55.088449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.464µs"
	I0429 14:56:55.126359       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="77.218µs"
	I0429 14:56:56.280593       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.99µs"
	I0429 14:56:56.291726       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.152µs"
	I0429 14:56:56.861019       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 14:56:58.017636       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.308µs"
	I0429 14:56:58.051822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.141736ms"
	I0429 14:56:58.052008       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.968µs"
	I0429 14:56:58.079773       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.489699ms"
	I0429 14:56:58.079962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.853µs"
	
	
	==> kube-proxy [229390072c65722944c9a0fa5cf8515c9bbdeb9d3b6ae5aa6e6fa92958f427cb] <==
	I0429 14:57:09.911768       1 server_linux.go:69] "Using iptables proxy"
	I0429 14:57:14.644579       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.85.2"]
	I0429 14:57:14.731703       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0429 14:57:14.731836       1 server_linux.go:165] "Using iptables Proxier"
	I0429 14:57:14.735997       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0429 14:57:14.736089       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0429 14:57:14.736146       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 14:57:14.736375       1 server.go:872] "Version info" version="v1.30.0"
	I0429 14:57:14.736616       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 14:57:14.737725       1 config.go:192] "Starting service config controller"
	I0429 14:57:14.737790       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 14:57:14.737870       1 config.go:101] "Starting endpoint slice config controller"
	I0429 14:57:14.737900       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 14:57:14.738422       1 config.go:319] "Starting node config controller"
	I0429 14:57:14.738472       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 14:57:14.840725       1 shared_informer.go:320] Caches are synced for service config
	I0429 14:57:14.841176       1 shared_informer.go:320] Caches are synced for node config
	I0429 14:57:14.841282       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d90fc1365e9e01da73afcded913c468886648421a06c72f50e7822767e16769e] <==
	I0429 14:56:54.259447       1 server_linux.go:69] "Using iptables proxy"
	I0429 14:56:54.281182       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.85.2"]
	I0429 14:56:54.306637       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0429 14:56:54.306689       1 server_linux.go:165] "Using iptables Proxier"
	I0429 14:56:54.308311       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0429 14:56:54.308337       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0429 14:56:54.308388       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 14:56:54.309105       1 server.go:872] "Version info" version="v1.30.0"
	I0429 14:56:54.309233       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 14:56:54.311852       1 config.go:192] "Starting service config controller"
	I0429 14:56:54.311876       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 14:56:54.311982       1 config.go:101] "Starting endpoint slice config controller"
	I0429 14:56:54.311993       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 14:56:54.316855       1 config.go:319] "Starting node config controller"
	I0429 14:56:54.316949       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 14:56:54.412050       1 shared_informer.go:320] Caches are synced for service config
	I0429 14:56:54.412727       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 14:56:54.417943       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [133431ffb9b984f5f6320a552799fdcc9af3dc92e0e3b77003ad5820e8d9ba90] <==
	W0429 14:56:35.489650       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 14:56:35.490508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 14:56:35.501799       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 14:56:35.501853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 14:56:35.501948       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 14:56:35.501968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 14:56:35.502021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 14:56:35.502035       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 14:56:35.502070       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 14:56:35.502244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 14:56:35.502069       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 14:56:35.502325       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 14:56:35.502410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 14:56:35.502539       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 14:56:35.502451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 14:56:35.502633       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 14:56:35.502504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 14:56:35.502713       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 14:56:36.412006       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 14:56:36.412044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 14:56:36.536941       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 14:56:36.537063       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0429 14:56:38.452005       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 14:57:01.418069       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0429 14:57:01.418102       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bc0f928fe0a658fbe2067b9f43871766f66ec9782c3e5acc32b91270bd624674] <==
	I0429 14:57:11.465413       1 serving.go:380] Generated self-signed cert in-memory
	W0429 14:57:14.537131       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 14:57:14.537249       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 14:57:14.537285       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 14:57:14.537335       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 14:57:14.570972       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 14:57:14.571010       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 14:57:14.578800       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 14:57:14.578835       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 14:57:14.579594       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 14:57:14.579659       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 14:57:14.679365       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.053427    1543 status_manager.go:853] "Failed to get status for pod" podUID="27f2ec7fc84ebf646bd9a11699ffe034" pod="kube-system/kube-apiserver-pause-432914" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-432914\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.055349    1543 scope.go:117] "RemoveContainer" containerID="234f1bc29ee03822fc891e1eb09cc6c8593ba210feba7bf50c7b6cd9cf576542"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.056352    1543 status_manager.go:853] "Failed to get status for pod" podUID="ab286b5f-0748-42cd-870f-9d418a303bb1" pod="kube-system/kindnet-lw2xg" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-lw2xg\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.056602    1543 status_manager.go:853] "Failed to get status for pod" podUID="00af55d4-66ab-41c4-912b-3ff241e5cfaf" pod="kube-system/coredns-7db6d8ff4d-p5vp6" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p5vp6\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.056828    1543 status_manager.go:853] "Failed to get status for pod" podUID="d0e8083a-e394-42f9-8d90-d8b339a5093b" pod="kube-system/coredns-7db6d8ff4d-74wxc" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-74wxc\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.057062    1543 status_manager.go:853] "Failed to get status for pod" podUID="58991698cce5216594c83d7edf13102b" pod="kube-system/etcd-pause-432914" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-432914\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.057304    1543 status_manager.go:853] "Failed to get status for pod" podUID="27f2ec7fc84ebf646bd9a11699ffe034" pod="kube-system/kube-apiserver-pause-432914" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-432914\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.057580    1543 status_manager.go:853] "Failed to get status for pod" podUID="b8dc4b04e2d48e66300a95afc3ca2e55" pod="kube-system/kube-controller-manager-pause-432914" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-432914\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.057816    1543 status_manager.go:853] "Failed to get status for pod" podUID="36ae885b-82fa-4dd5-b824-77da146dc101" pod="kube-system/kube-proxy-5djxx" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5djxx\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.063851    1543 scope.go:117] "RemoveContainer" containerID="133431ffb9b984f5f6320a552799fdcc9af3dc92e0e3b77003ad5820e8d9ba90"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.068243    1543 status_manager.go:853] "Failed to get status for pod" podUID="ab286b5f-0748-42cd-870f-9d418a303bb1" pod="kube-system/kindnet-lw2xg" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-lw2xg\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.068651    1543 status_manager.go:853] "Failed to get status for pod" podUID="00af55d4-66ab-41c4-912b-3ff241e5cfaf" pod="kube-system/coredns-7db6d8ff4d-p5vp6" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p5vp6\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.069922    1543 status_manager.go:853] "Failed to get status for pod" podUID="d0e8083a-e394-42f9-8d90-d8b339a5093b" pod="kube-system/coredns-7db6d8ff4d-74wxc" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-74wxc\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.070229    1543 status_manager.go:853] "Failed to get status for pod" podUID="5601acafba42956e33b12392b14c4254" pod="kube-system/kube-scheduler-pause-432914" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-432914\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.070459    1543 status_manager.go:853] "Failed to get status for pod" podUID="58991698cce5216594c83d7edf13102b" pod="kube-system/etcd-pause-432914" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-432914\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.077127    1543 status_manager.go:853] "Failed to get status for pod" podUID="27f2ec7fc84ebf646bd9a11699ffe034" pod="kube-system/kube-apiserver-pause-432914" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-432914\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.077466    1543 status_manager.go:853] "Failed to get status for pod" podUID="b8dc4b04e2d48e66300a95afc3ca2e55" pod="kube-system/kube-controller-manager-pause-432914" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-432914\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: I0429 14:57:09.077701    1543 status_manager.go:853] "Failed to get status for pod" podUID="36ae885b-82fa-4dd5-b824-77da146dc101" pod="kube-system/kube-proxy-5djxx" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5djxx\": dial tcp 192.168.85.2:8443: connect: connection refused"
	Apr 29 14:57:09 pause-432914 kubelet[1543]: E0429 14:57:09.267322    1543 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-432914?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="800ms"
	Apr 29 14:57:14 pause-432914 kubelet[1543]: E0429 14:57:14.516476    1543 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Apr 29 14:57:14 pause-432914 kubelet[1543]: E0429 14:57:14.517193    1543 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Apr 29 14:57:18 pause-432914 kubelet[1543]: W0429 14:57:18.026192    1543 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Apr 29 14:57:18 pause-432914 kubelet[1543]: W0429 14:57:18.028360    1543 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Apr 29 14:57:25 pause-432914 kubelet[1543]: I0429 14:57:25.782001    1543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-74wxc" podStartSLOduration=32.781982387 podStartE2EDuration="32.781982387s" podCreationTimestamp="2024-04-29 14:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 14:56:58.058467658 +0000 UTC m=+20.322146589" watchObservedRunningTime="2024-04-29 14:57:25.781982387 +0000 UTC m=+48.045661319"
	Apr 29 14:57:28 pause-432914 kubelet[1543]: W0429 14:57:28.039804    1543 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 14:57:32.233282 2069284 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18771-1897267/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-432914 -n pause-432914
helpers_test.go:261: (dbg) Run:  kubectl --context pause-432914 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (35.10s)
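
Note on the "bufio.Scanner: token too long" message in the stderr above: that error comes from Go's bufio.Scanner, whose default maximum token size is 64 KiB (bufio.MaxScanTokenSize), so a single line in the lastStart.txt being read evidently exceeds that limit. The sketch below is not minikube's actual logs code; it is a minimal, hypothetical Go reproduction (the temp file name and sizes are illustrative only) showing how the default limit trips and how Scanner.Buffer can raise it.

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		// Create a throwaway file whose single line exceeds the default
		// bufio.Scanner token limit of 64 KiB (bufio.MaxScanTokenSize).
		tmp, err := os.CreateTemp("", "longline-*.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer os.Remove(tmp.Name())
		if _, err := tmp.WriteString(strings.Repeat("x", bufio.MaxScanTokenSize+1) + "\n"); err != nil {
			log.Fatal(err)
		}
		tmp.Close()

		// First pass: default scanner settings fail with "token too long".
		f, err := os.Open(tmp.Name())
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		s := bufio.NewScanner(f)
		for s.Scan() {
		}
		fmt.Println("default limit:", s.Err()) // bufio.Scanner: token too long

		// Second pass: raise the maximum token size before scanning.
		if _, err := f.Seek(0, 0); err != nil {
			log.Fatal(err)
		}
		s = bufio.NewScanner(f)
		s.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow lines up to 1 MiB
		for s.Scan() {
		}
		fmt.Println("1 MiB limit:", s.Err()) // <nil>
	}

In other words, the failure to render the "Last Start" section is a buffer-limit artifact of reading an over-long log line, not a problem with the cluster under test.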

                                                
                                    

Test pass (288/321)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.9
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.30.0/json-events 7.98
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.08
18 TestDownloadOnly/v1.30.0/DeleteAll 0.21
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 216.48
29 TestAddons/parallel/Registry 17.64
31 TestAddons/parallel/InspektorGadget 10.78
35 TestAddons/parallel/CSI 53.06
36 TestAddons/parallel/Headlamp 10.93
37 TestAddons/parallel/CloudSpanner 5.56
38 TestAddons/parallel/LocalPath 53.36
39 TestAddons/parallel/NvidiaDevicePlugin 5.57
40 TestAddons/parallel/Yakd 6.01
43 TestAddons/serial/GCPAuth/Namespaces 0.17
44 TestAddons/StoppedEnableDisable 12.23
45 TestCertOptions 37.05
46 TestCertExpiration 250.53
48 TestForceSystemdFlag 38.78
49 TestForceSystemdEnv 44.7
55 TestErrorSpam/setup 32.75
56 TestErrorSpam/start 0.7
57 TestErrorSpam/status 0.96
58 TestErrorSpam/pause 1.66
59 TestErrorSpam/unpause 1.76
60 TestErrorSpam/stop 1.46
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 52.13
65 TestFunctional/serial/AuditLog 0.01
66 TestFunctional/serial/SoftStart 39.36
67 TestFunctional/serial/KubeContext 0.07
68 TestFunctional/serial/KubectlGetPods 0.1
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.96
72 TestFunctional/serial/CacheCmd/cache/add_local 1.11
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.89
77 TestFunctional/serial/CacheCmd/cache/delete 0.13
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
80 TestFunctional/serial/ExtraConfig 33.78
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.88
83 TestFunctional/serial/LogsFileCmd 1.69
84 TestFunctional/serial/InvalidService 4.82
86 TestFunctional/parallel/ConfigCmd 0.55
87 TestFunctional/parallel/DashboardCmd 16.25
88 TestFunctional/parallel/DryRun 0.43
89 TestFunctional/parallel/InternationalLanguage 0.19
90 TestFunctional/parallel/StatusCmd 1.19
94 TestFunctional/parallel/ServiceCmdConnect 12.71
95 TestFunctional/parallel/AddonsCmd 0.24
96 TestFunctional/parallel/PersistentVolumeClaim 26.06
98 TestFunctional/parallel/SSHCmd 0.68
99 TestFunctional/parallel/CpCmd 2.39
101 TestFunctional/parallel/FileSync 0.38
102 TestFunctional/parallel/CertSync 2.32
106 TestFunctional/parallel/NodeLabels 0.11
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.81
110 TestFunctional/parallel/License 0.39
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.45
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
124 TestFunctional/parallel/ProfileCmd/profile_list 0.4
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
126 TestFunctional/parallel/MountCmd/any-port 8.44
127 TestFunctional/parallel/ServiceCmd/List 0.67
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
130 TestFunctional/parallel/ServiceCmd/Format 0.38
131 TestFunctional/parallel/ServiceCmd/URL 0.37
132 TestFunctional/parallel/MountCmd/specific-port 2.19
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.36
134 TestFunctional/parallel/Version/short 0.07
135 TestFunctional/parallel/Version/components 1.13
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
140 TestFunctional/parallel/ImageCommands/ImageBuild 2.97
141 TestFunctional/parallel/ImageCommands/Setup 2.51
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.68
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.49
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.69
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.88
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.31
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.92
152 TestFunctional/delete_addon-resizer_images 0.08
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.01
158 TestMultiControlPlane/serial/StartCluster 157.31
159 TestMultiControlPlane/serial/DeployApp 7.59
160 TestMultiControlPlane/serial/PingHostFromPods 1.66
161 TestMultiControlPlane/serial/AddWorkerNode 53.19
162 TestMultiControlPlane/serial/NodeLabels 0.11
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.73
164 TestMultiControlPlane/serial/CopyFile 19.15
165 TestMultiControlPlane/serial/StopSecondaryNode 12.73
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
167 TestMultiControlPlane/serial/RestartSecondaryNode 20.21
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.99
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 204.04
170 TestMultiControlPlane/serial/DeleteSecondaryNode 13.08
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
172 TestMultiControlPlane/serial/StopCluster 35.86
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
175 TestMultiControlPlane/serial/AddSecondaryNode 61.13
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.78
180 TestJSONOutput/start/Command 75.94
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.74
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.65
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.8
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.24
205 TestKicCustomNetwork/create_custom_network 39.72
206 TestKicCustomNetwork/use_default_bridge_network 34.7
207 TestKicExistingNetwork 33.91
208 TestKicCustomSubnet 33.18
209 TestKicStaticIP 33.18
210 TestMainNoArgs 0.07
211 TestMinikubeProfile 64.86
214 TestMountStart/serial/StartWithMountFirst 6.76
215 TestMountStart/serial/VerifyMountFirst 0.26
216 TestMountStart/serial/StartWithMountSecond 6.3
217 TestMountStart/serial/VerifyMountSecond 0.26
218 TestMountStart/serial/DeleteFirst 1.63
219 TestMountStart/serial/VerifyMountPostDelete 0.27
220 TestMountStart/serial/Stop 1.21
221 TestMountStart/serial/RestartStopped 7.63
222 TestMountStart/serial/VerifyMountPostStop 0.26
225 TestMultiNode/serial/FreshStart2Nodes 121.52
226 TestMultiNode/serial/DeployApp2Nodes 5.87
227 TestMultiNode/serial/PingHostFrom2Pods 1.04
228 TestMultiNode/serial/AddNode 21.89
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.32
231 TestMultiNode/serial/CopyFile 10.24
232 TestMultiNode/serial/StopNode 2.27
233 TestMultiNode/serial/StartAfterStop 9.64
234 TestMultiNode/serial/RestartKeepsNodes 81.83
235 TestMultiNode/serial/DeleteNode 5.29
236 TestMultiNode/serial/StopMultiNode 23.98
237 TestMultiNode/serial/RestartMultiNode 55.63
238 TestMultiNode/serial/ValidateNameConflict 34.31
243 TestPreload 127.88
245 TestScheduledStopUnix 106.37
248 TestInsufficientStorage 10.97
249 TestRunningBinaryUpgrade 72.6
251 TestKubernetesUpgrade 384.94
252 TestMissingContainerUpgrade 160.12
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
255 TestNoKubernetes/serial/StartWithK8s 37.67
256 TestNoKubernetes/serial/StartWithStopK8s 8.48
257 TestNoKubernetes/serial/Start 9.38
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
259 TestNoKubernetes/serial/ProfileList 1.03
260 TestNoKubernetes/serial/Stop 1.27
261 TestNoKubernetes/serial/StartNoArgs 8.24
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
263 TestStoppedBinaryUpgrade/Setup 1.42
264 TestStoppedBinaryUpgrade/Upgrade 76.84
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.43
274 TestPause/serial/Start 54.8
283 TestNetworkPlugins/group/false 5.18
288 TestStartStop/group/old-k8s-version/serial/FirstStart 170.99
289 TestStartStop/group/old-k8s-version/serial/DeployApp 10.95
291 TestStartStop/group/no-preload/serial/FirstStart 65.07
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.6
293 TestStartStop/group/old-k8s-version/serial/Stop 14.84
294 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.66
295 TestStartStop/group/old-k8s-version/serial/SecondStart 142.47
296 TestStartStop/group/no-preload/serial/DeployApp 8.39
297 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.6
298 TestStartStop/group/no-preload/serial/Stop 12.69
299 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
300 TestStartStop/group/no-preload/serial/SecondStart 279.38
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.14
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
304 TestStartStop/group/old-k8s-version/serial/Pause 2.95
306 TestStartStop/group/embed-certs/serial/FirstStart 74.84
307 TestStartStop/group/embed-certs/serial/DeployApp 8.33
308 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
309 TestStartStop/group/embed-certs/serial/Stop 11.98
310 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
311 TestStartStop/group/embed-certs/serial/SecondStart 288.01
312 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
313 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
314 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
315 TestStartStop/group/no-preload/serial/Pause 3.09
317 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 52.84
318 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.1
320 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.97
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
322 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.69
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
326 TestStartStop/group/embed-certs/serial/Pause 2.97
328 TestStartStop/group/newest-cni/serial/FirstStart 47.82
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.1
331 TestStartStop/group/newest-cni/serial/Stop 1.3
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
333 TestStartStop/group/newest-cni/serial/SecondStart 18.34
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
337 TestStartStop/group/newest-cni/serial/Pause 2.84
338 TestNetworkPlugins/group/auto/Start 77.14
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
340 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
341 TestNetworkPlugins/group/auto/KubeletFlags 0.29
342 TestNetworkPlugins/group/auto/NetCatPod 9.28
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
344 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.04
345 TestNetworkPlugins/group/auto/DNS 0.26
346 TestNetworkPlugins/group/auto/Localhost 0.25
347 TestNetworkPlugins/group/auto/HairPin 0.19
348 TestNetworkPlugins/group/kindnet/Start 84.89
349 TestNetworkPlugins/group/calico/Start 73.76
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
351 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
352 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
353 TestNetworkPlugins/group/calico/ControllerPod 6.01
354 TestNetworkPlugins/group/kindnet/DNS 0.23
355 TestNetworkPlugins/group/kindnet/Localhost 0.15
356 TestNetworkPlugins/group/kindnet/HairPin 0.16
357 TestNetworkPlugins/group/calico/KubeletFlags 0.31
358 TestNetworkPlugins/group/calico/NetCatPod 9.28
359 TestNetworkPlugins/group/calico/DNS 0.29
360 TestNetworkPlugins/group/calico/Localhost 0.25
361 TestNetworkPlugins/group/calico/HairPin 0.24
362 TestNetworkPlugins/group/custom-flannel/Start 77.42
363 TestNetworkPlugins/group/enable-default-cni/Start 90.32
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.27
366 TestNetworkPlugins/group/custom-flannel/DNS 0.2
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
369 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
370 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.32
371 TestNetworkPlugins/group/flannel/Start 58.65
372 TestNetworkPlugins/group/enable-default-cni/DNS 0.35
373 TestNetworkPlugins/group/enable-default-cni/Localhost 0.35
374 TestNetworkPlugins/group/enable-default-cni/HairPin 0.38
375 TestNetworkPlugins/group/bridge/Start 86.4
376 TestNetworkPlugins/group/flannel/ControllerPod 6.01
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
378 TestNetworkPlugins/group/flannel/NetCatPod 10.28
379 TestNetworkPlugins/group/flannel/DNS 0.32
380 TestNetworkPlugins/group/flannel/Localhost 0.26
381 TestNetworkPlugins/group/flannel/HairPin 0.23
382 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
383 TestNetworkPlugins/group/bridge/NetCatPod 10.31
384 TestNetworkPlugins/group/bridge/DNS 0.17
385 TestNetworkPlugins/group/bridge/Localhost 0.15
386 TestNetworkPlugins/group/bridge/HairPin 0.16
x
+
TestDownloadOnly/v1.20.0/json-events (8.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-668091 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-668091 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.901577368s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.90s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-668091
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-668091: exit status 85 (82.591918ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-668091 | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC |          |
	|         | -p download-only-668091        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 14:06:32
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 14:06:32.517468 1902689 out.go:291] Setting OutFile to fd 1 ...
	I0429 14:06:32.517628 1902689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:06:32.517637 1902689 out.go:304] Setting ErrFile to fd 2...
	I0429 14:06:32.517643 1902689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:06:32.517913 1902689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
	W0429 14:06:32.518049 1902689 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18771-1897267/.minikube/config/config.json: open /home/jenkins/minikube-integration/18771-1897267/.minikube/config/config.json: no such file or directory
	I0429 14:06:32.518452 1902689 out.go:298] Setting JSON to true
	I0429 14:06:32.519391 1902689 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":35337,"bootTime":1714364256,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 14:06:32.519472 1902689 start.go:139] virtualization:  
	I0429 14:06:32.523176 1902689 out.go:97] [download-only-668091] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 14:06:32.525492 1902689 out.go:169] MINIKUBE_LOCATION=18771
	W0429 14:06:32.523360 1902689 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball: no such file or directory
	I0429 14:06:32.523412 1902689 notify.go:220] Checking for updates...
	I0429 14:06:32.530780 1902689 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 14:06:32.533514 1902689 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:06:32.536087 1902689 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	I0429 14:06:32.538685 1902689 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0429 14:06:32.543629 1902689 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 14:06:32.543931 1902689 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 14:06:32.563966 1902689 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 14:06:32.564075 1902689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:06:32.627316 1902689 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-29 14:06:32.617981652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:06:32.627429 1902689 docker.go:295] overlay module found
	I0429 14:06:32.630075 1902689 out.go:97] Using the docker driver based on user configuration
	I0429 14:06:32.630103 1902689 start.go:297] selected driver: docker
	I0429 14:06:32.630109 1902689 start.go:901] validating driver "docker" against <nil>
	I0429 14:06:32.630209 1902689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:06:32.683486 1902689 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-29 14:06:32.674113456 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:06:32.683651 1902689 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 14:06:32.683947 1902689 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0429 14:06:32.684112 1902689 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 14:06:32.686565 1902689 out.go:169] Using Docker driver with root privileges
	I0429 14:06:32.688608 1902689 cni.go:84] Creating CNI manager for ""
	I0429 14:06:32.688638 1902689 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 14:06:32.688648 1902689 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 14:06:32.688753 1902689 start.go:340] cluster config:
	{Name:download-only-668091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-668091 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:06:32.691356 1902689 out.go:97] Starting "download-only-668091" primary control-plane node in "download-only-668091" cluster
	I0429 14:06:32.691374 1902689 cache.go:121] Beginning downloading kic base image for docker with crio
	I0429 14:06:32.693376 1902689 out.go:97] Pulling base image v0.0.43-1713736339-18706 ...
	I0429 14:06:32.693413 1902689 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 14:06:32.693455 1902689 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 14:06:32.707192 1902689 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0429 14:06:32.707375 1902689 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory
	I0429 14:06:32.707474 1902689 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0429 14:06:32.771796 1902689 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0429 14:06:32.771823 1902689 cache.go:56] Caching tarball of preloaded images
	I0429 14:06:32.771996 1902689 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 14:06:32.774238 1902689 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0429 14:06:32.774259 1902689 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0429 14:06:32.885137 1902689 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-668091 host does not exist
	  To start a cluster, run: "minikube start -p download-only-668091"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
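The download.go:107 line in the output above fetches the preload tarball and validates it against the md5 digest carried in the ?checksum= query parameter. A rough sketch of that verify-while-downloading flow, assuming a single plain HTTP GET; minikube's real downloader layers retries and progress reporting on top of this.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// downloadWithMD5 fetches url into dest and compares the file's md5 sum
	// against wantMD5, the hex digest carried in the ?checksum= query above.
	func downloadWithMD5(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		h := md5.New()
		// Write to disk and hash in a single pass over the response body.
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		err := downloadWithMD5(
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4",
			"/tmp/preload.tar.lz4",
			"59cd2ef07b53f039bfd1761b921f2a02")
		fmt.Println(err)
	}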

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-668091
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/json-events (7.98s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-605899 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-605899 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.981385839s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (7.98s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-605899
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-605899: exit status 85 (80.732287ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-668091 | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC |                     |
	|         | -p download-only-668091        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC | 29 Apr 24 14:06 UTC |
	| delete  | -p download-only-668091        | download-only-668091 | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC | 29 Apr 24 14:06 UTC |
	| start   | -o=json --download-only        | download-only-605899 | jenkins | v1.33.0 | 29 Apr 24 14:06 UTC |                     |
	|         | -p download-only-605899        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 14:06:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 14:06:41.857274 1902857 out.go:291] Setting OutFile to fd 1 ...
	I0429 14:06:41.857395 1902857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:06:41.857405 1902857 out.go:304] Setting ErrFile to fd 2...
	I0429 14:06:41.857411 1902857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:06:41.857643 1902857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
	I0429 14:06:41.858025 1902857 out.go:298] Setting JSON to true
	I0429 14:06:41.858896 1902857 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":35346,"bootTime":1714364256,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 14:06:41.858963 1902857 start.go:139] virtualization:  
	I0429 14:06:41.861665 1902857 out.go:97] [download-only-605899] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 14:06:41.863736 1902857 out.go:169] MINIKUBE_LOCATION=18771
	I0429 14:06:41.861860 1902857 notify.go:220] Checking for updates...
	I0429 14:06:41.867492 1902857 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 14:06:41.870370 1902857 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:06:41.872492 1902857 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	I0429 14:06:41.874323 1902857 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0429 14:06:41.877983 1902857 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 14:06:41.878252 1902857 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 14:06:41.898477 1902857 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 14:06:41.898588 1902857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:06:41.963471 1902857 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-04-29 14:06:41.953277324 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:06:41.963575 1902857 docker.go:295] overlay module found
	I0429 14:06:41.965839 1902857 out.go:97] Using the docker driver based on user configuration
	I0429 14:06:41.965865 1902857 start.go:297] selected driver: docker
	I0429 14:06:41.965872 1902857 start.go:901] validating driver "docker" against <nil>
	I0429 14:06:41.965973 1902857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:06:42.019574 1902857 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-04-29 14:06:42.009003411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:06:42.019758 1902857 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 14:06:42.020113 1902857 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0429 14:06:42.020285 1902857 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 14:06:42.022386 1902857 out.go:169] Using Docker driver with root privileges
	I0429 14:06:42.024231 1902857 cni.go:84] Creating CNI manager for ""
	I0429 14:06:42.024266 1902857 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0429 14:06:42.024278 1902857 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 14:06:42.024374 1902857 start.go:340] cluster config:
	{Name:download-only-605899 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-605899 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:06:42.026485 1902857 out.go:97] Starting "download-only-605899" primary control-plane node in "download-only-605899" cluster
	I0429 14:06:42.026530 1902857 cache.go:121] Beginning downloading kic base image for docker with crio
	I0429 14:06:42.028373 1902857 out.go:97] Pulling base image v0.0.43-1713736339-18706 ...
	I0429 14:06:42.028402 1902857 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:06:42.028507 1902857 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0429 14:06:42.044893 1902857 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0429 14:06:42.045040 1902857 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory
	I0429 14:06:42.045061 1902857 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory, skipping pull
	I0429 14:06:42.045066 1902857 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in cache, skipping pull
	I0429 14:06:42.045075 1902857 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e as a tarball
	I0429 14:06:42.088222 1902857 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	I0429 14:06:42.088249 1902857 cache.go:56] Caching tarball of preloaded images
	I0429 14:06:42.088429 1902857 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 14:06:42.090705 1902857 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0429 14:06:42.090743 1902857 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4 ...
	I0429 14:06:42.211011 1902857 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:0b6b385f66a101b8e819a9a918236667 -> /home/jenkins/minikube-integration/18771-1897267/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-605899 host does not exist
	  To start a cluster, run: "minikube start -p download-only-605899"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.08s)
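As in the v1.20.0 run, minikube logs exits with status 85 here because a download-only profile never creates a host, and the test records that failure without treating it as fatal. A small sketch of capturing the exit status from Go instead of aborting on it; the binary path is the same one used throughout this report.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// runLogs runs `minikube logs -p <profile>` and reports the exit status.
	// For a download-only profile the host was never created, so a non-zero
	// status (85 in the runs above) is the expected outcome, not a failure.
	func runLogs(profile string) (int, error) {
		err := exec.Command("out/minikube-linux-arm64", "logs", "-p", profile).Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return exitErr.ExitCode(), nil
		}
		return 0, err // err is nil on success, or the binary could not be started
	}

	func main() {
		code, err := runLogs("download-only-605899")
		fmt.Println("exit status:", code, "err:", err)
	}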

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-605899
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-349287 --alsologtostderr --binary-mirror http://127.0.0.1:36983 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-349287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-349287
--- PASS: TestBinaryMirror (0.56s)
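The --binary-mirror flag points this start at http://127.0.0.1:36983 instead of the upstream release buckets, so any HTTP endpoint that can serve the kubeadm/kubelet/kubectl binaries will satisfy it. A minimal stand-in server under that assumption; the ./mirror directory is illustrative and not what the test actually serves from.

	package main

	import (
		"log"
		"net/http"
	)

	// Serve a local directory over HTTP so it can be passed to
	// `minikube start --binary-mirror http://127.0.0.1:36983`.
	func main() {
		dir := "./mirror" // hypothetical directory pre-populated with the binaries
		log.Println("serving", dir, "on 127.0.0.1:36983")
		log.Fatal(http.ListenAndServe("127.0.0.1:36983", http.FileServer(http.Dir(dir))))
	}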

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-457090
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-457090: exit status 85 (84.505606ms)
-- stdout --
	* Profile "addons-457090" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-457090"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-457090
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-457090: exit status 85 (84.667722ms)
-- stdout --
	* Profile "addons-457090" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-457090"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (216.48s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-457090 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-457090 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m36.480576936s)
--- PASS: TestAddons/Setup (216.48s)

                                                
                                    
x
+
TestAddons/parallel/Registry (17.64s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 53.908474ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-zhb4n" [9abf552b-43fc-4cf4-968b-c3f3be943f93] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004696176s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-96wq6" [1b8f503a-0540-4820-bd92-04b584ad56fb] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004345557s
addons_test.go:340: (dbg) Run:  kubectl --context addons-457090 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-457090 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-457090 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.163666467s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-457090 ip
2024/04/29 14:10:44 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-457090 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-arm64 -p addons-457090 addons disable registry --alsologtostderr -v=1: (1.050160524s)
--- PASS: TestAddons/parallel/Registry (17.64s)
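The decisive step in this test is the one-off busybox pod that probes the registry Service by its in-cluster DNS name. A sketch of driving that same kubectl invocation from Go; the -it flag from the log is replaced by -i here since there is no TTY in this setting.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probeRegistry launches the same throwaway pod the test uses: busybox
	// running `wget --spider` against the registry Service. A nil error means
	// the Service name resolved and the endpoint answered.
	func probeRegistry(kubeContext string) error {
		cmd := exec.Command("kubectl", "--context", kubeContext,
			"run", "--rm", "registry-test", "--restart=Never",
			"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
			"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		return err
	}

	func main() {
		if err := probeRegistry("addons-457090"); err != nil {
			fmt.Println("registry probe failed:", err)
		}
	}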

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.78s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xzqdr" [d24e393d-8f57-445e-b3e1-03309c62bab8] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00393043s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-457090
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-457090: (5.774317394s)
--- PASS: TestAddons/parallel/InspektorGadget (10.78s)

                                                
                                    
x
+
TestAddons/parallel/CSI (53.06s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 52.919672ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-457090 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-457090 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ec042c93-a5d4-44f3-adb2-3b6b1386b7ca] Pending
helpers_test.go:344: "task-pv-pod" [ec042c93-a5d4-44f3-adb2-3b6b1386b7ca] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ec042c93-a5d4-44f3-adb2-3b6b1386b7ca] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.00330698s
addons_test.go:584: (dbg) Run:  kubectl --context addons-457090 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-457090 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-457090 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-457090 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-457090 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-457090 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-457090 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [431d04aa-21fe-4c3d-9010-950e4665da58] Pending
helpers_test.go:344: "task-pv-pod-restore" [431d04aa-21fe-4c3d-9010-950e4665da58] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [431d04aa-21fe-4c3d-9010-950e4665da58] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004240971s
addons_test.go:626: (dbg) Run:  kubectl --context addons-457090 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-457090 delete pod task-pv-pod-restore: (1.19322393s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-457090 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-457090 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-457090 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-457090 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.723593153s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-457090 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.06s)
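The repeated helpers_test.go:394 lines above are a poll loop on the claim's phase. A compact sketch of that loop, assuming a fixed two-second interval and treating any phase other than Bound as not ready yet; the real helper's interval and error handling may differ.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCBound polls `kubectl get pvc <name> -o jsonpath={.status.phase}`
	// until the claim reports Bound or the timeout elapses, mirroring the
	// repeated helper invocations in the log above.
	func waitForPVCBound(kubeContext, namespace, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"get", "pvc", name, "-n", namespace,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
	}

	func main() {
		fmt.Println(waitForPVCBound("addons-457090", "default", "hpvc", 6*time.Minute))
	}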

                                                
                                    
x
+
TestAddons/parallel/Headlamp (10.93s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-457090 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-2zx6r" [2ff7ac30-0c53-4d71-82a9-0a5a13617db6] Pending
helpers_test.go:344: "headlamp-7559bf459f-2zx6r" [2ff7ac30-0c53-4d71-82a9-0a5a13617db6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-2zx6r" [2ff7ac30-0c53-4d71-82a9-0a5a13617db6] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003436199s
--- PASS: TestAddons/parallel/Headlamp (10.93s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.56s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-9pkxr" [a5e279f7-f520-4154-9d1d-65e2fa449ae7] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003327366s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-457090
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (53.36s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-457090 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-457090 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-457090 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a9821dd1-5e1a-41ee-9971-22b698e09ed3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a9821dd1-5e1a-41ee-9971-22b698e09ed3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a9821dd1-5e1a-41ee-9971-22b698e09ed3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004357178s
addons_test.go:891: (dbg) Run:  kubectl --context addons-457090 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-457090 ssh "cat /opt/local-path-provisioner/pvc-d73e47b3-72c4-4752-8811-fa0e3b0dd658_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-457090 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-457090 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-457090 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-457090 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.256691711s)
--- PASS: TestAddons/parallel/LocalPath (53.36s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.57s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-b6fbn" [d72d7bb4-220a-44af-9b8f-8b406f53e814] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005266455s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-457090
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.57s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-w8n26" [c7f09e5d-6e8a-4a4d-9e4c-75db8239995d] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004403189s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-457090 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-457090 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.23s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-457090
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-457090: (11.941400445s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-457090
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-457090
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-457090
--- PASS: TestAddons/StoppedEnableDisable (12.23s)

                                                
                                    
x
+
TestCertOptions (37.05s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-844207 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0429 14:59:13.100162 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-844207 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.429003278s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-844207 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-844207 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-844207 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-844207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-844207
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-844207: (1.985179801s)
--- PASS: TestCertOptions (37.05s)

                                                
                                    
TestCertExpiration (250.53s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-855721 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-855721 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.20693522s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-855721 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-855721 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (26.94822596s)
helpers_test.go:175: Cleaning up "cert-expiration-855721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-855721
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-855721: (2.378263425s)
--- PASS: TestCertExpiration (250.53s)

                                                
                                    
TestForceSystemdFlag (38.78s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-277011 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-277011 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.808221228s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-277011 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-277011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-277011
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-277011: (2.605095283s)
--- PASS: TestForceSystemdFlag (38.78s)

                                                
                                    
TestForceSystemdEnv (44.7s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-125380 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-125380 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.096081065s)
helpers_test.go:175: Cleaning up "force-systemd-env-125380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-125380
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-125380: (2.599344948s)
--- PASS: TestForceSystemdEnv (44.70s)

                                                
                                    
TestErrorSpam/setup (32.75s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-450771 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-450771 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-450771 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-450771 --driver=docker  --container-runtime=crio: (32.748744705s)
--- PASS: TestErrorSpam/setup (32.75s)

                                                
                                    
TestErrorSpam/start (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-450771 --log_dir /tmp/nospam-450771 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-450771 --log_dir /tmp/nospam-450771 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-450771 --log_dir /tmp/nospam-450771 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

                                                
                                    
TestErrorSpam/status (0.96s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-450771 --log_dir /tmp/nospam-450771 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-450771 --log_dir /tmp/nospam-450771 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-450771 --log_dir /tmp/nospam-450771 status
--- PASS: TestErrorSpam/status (0.96s)

                                                
                                    
TestErrorSpam/pause (1.66s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-450771 --log_dir /tmp/nospam-450771 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-450771 --log_dir /tmp/nospam-450771 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-450771 --log_dir /tmp/nospam-450771 pause
--- PASS: TestErrorSpam/pause (1.66s)

                                                
                                    
TestErrorSpam/unpause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-450771 --log_dir /tmp/nospam-450771 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-450771 --log_dir /tmp/nospam-450771 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-450771 --log_dir /tmp/nospam-450771 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

                                                
                                    
TestErrorSpam/stop (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-450771 --log_dir /tmp/nospam-450771 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-450771 --log_dir /tmp/nospam-450771 stop: (1.24304186s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-450771 --log_dir /tmp/nospam-450771 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-450771 --log_dir /tmp/nospam-450771 stop
--- PASS: TestErrorSpam/stop (1.46s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18771-1897267/.minikube/files/etc/test/nested/copy/1902684/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (52.13s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-304104 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-304104 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (52.12528869s)
--- PASS: TestFunctional/serial/StartWithProxy (52.13s)

                                                
                                    
TestFunctional/serial/AuditLog (0.01s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.01s)

                                                
                                    
TestFunctional/serial/SoftStart (39.36s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-304104 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-304104 --alsologtostderr -v=8: (39.354376699s)
functional_test.go:659: soft start took 39.35685286s for "functional-304104" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.36s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-304104 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.96s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-304104 cache add registry.k8s.io/pause:3.1: (1.205044108s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-304104 cache add registry.k8s.io/pause:3.3: (1.226958019s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-304104 cache add registry.k8s.io/pause:latest: (1.531835149s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.96s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-304104 /tmp/TestFunctionalserialCacheCmdcacheadd_local1923619340/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 cache add minikube-local-cache-test:functional-304104
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 cache delete minikube-local-cache-test:functional-304104
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-304104
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-304104 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (293.586048ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.89s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 kubectl -- --context functional-304104 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-304104 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.78s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-304104 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0429 14:20:28.163052 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
E0429 14:20:28.169229 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
E0429 14:20:28.179599 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
E0429 14:20:28.199911 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
E0429 14:20:28.240291 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
E0429 14:20:28.320697 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
E0429 14:20:28.481237 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
E0429 14:20:28.801859 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
E0429 14:20:29.442793 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
E0429 14:20:30.723254 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
E0429 14:20:33.283506 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
E0429 14:20:38.404035 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
E0429 14:20:48.644820 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-304104 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.779095839s)
functional_test.go:757: restart took 33.779200757s for "functional-304104" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.78s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-304104 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.88s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-304104 logs: (1.877806111s)
--- PASS: TestFunctional/serial/LogsCmd (1.88s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.69s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 logs --file /tmp/TestFunctionalserialLogsFileCmd3553320254/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-304104 logs --file /tmp/TestFunctionalserialLogsFileCmd3553320254/001/logs.txt: (1.688821712s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.69s)

                                                
                                    
TestFunctional/serial/InvalidService (4.82s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-304104 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-304104
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-304104: exit status 115 (591.598473ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31841 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-304104 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.82s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-304104 config get cpus: exit status 14 (91.776733ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-304104 config get cpus: exit status 14 (92.520999ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.55s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (16.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-304104 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-304104 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1929245: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.25s)

                                                
                                    
TestFunctional/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-304104 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-304104 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (188.433026ms)

                                                
                                                
-- stdout --
	* [functional-304104] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 14:21:42.752495 1929007 out.go:291] Setting OutFile to fd 1 ...
	I0429 14:21:42.752706 1929007 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:21:42.752718 1929007 out.go:304] Setting ErrFile to fd 2...
	I0429 14:21:42.752724 1929007 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:21:42.752959 1929007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
	I0429 14:21:42.753301 1929007 out.go:298] Setting JSON to false
	I0429 14:21:42.754257 1929007 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":36247,"bootTime":1714364256,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 14:21:42.754328 1929007 start.go:139] virtualization:  
	I0429 14:21:42.759122 1929007 out.go:177] * [functional-304104] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 14:21:42.761230 1929007 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 14:21:42.761290 1929007 notify.go:220] Checking for updates...
	I0429 14:21:42.766198 1929007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 14:21:42.768334 1929007 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:21:42.770465 1929007 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	I0429 14:21:42.772924 1929007 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0429 14:21:42.774912 1929007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 14:21:42.777729 1929007 config.go:182] Loaded profile config "functional-304104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:21:42.778269 1929007 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 14:21:42.797745 1929007 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 14:21:42.797865 1929007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:21:42.863383 1929007 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-29 14:21:42.853635833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:21:42.863492 1929007 docker.go:295] overlay module found
	I0429 14:21:42.865744 1929007 out.go:177] * Using the docker driver based on existing profile
	I0429 14:21:42.867906 1929007 start.go:297] selected driver: docker
	I0429 14:21:42.867924 1929007 start.go:901] validating driver "docker" against &{Name:functional-304104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-304104 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:21:42.868055 1929007 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 14:21:42.870846 1929007 out.go:177] 
	W0429 14:21:42.873684 1929007 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0429 14:21:42.875960 1929007 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-304104 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-304104 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-304104 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (193.542567ms)

                                                
                                                
-- stdout --
	* [functional-304104] minikube v1.33.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 14:21:42.558725 1928965 out.go:291] Setting OutFile to fd 1 ...
	I0429 14:21:42.558902 1928965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:21:42.558914 1928965 out.go:304] Setting ErrFile to fd 2...
	I0429 14:21:42.558919 1928965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:21:42.559304 1928965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
	I0429 14:21:42.559690 1928965 out.go:298] Setting JSON to false
	I0429 14:21:42.560694 1928965 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":36247,"bootTime":1714364256,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 14:21:42.560760 1928965 start.go:139] virtualization:  
	I0429 14:21:42.564147 1928965 out.go:177] * [functional-304104] minikube v1.33.0 sur Ubuntu 20.04 (arm64)
	I0429 14:21:42.567306 1928965 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 14:21:42.569134 1928965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 14:21:42.567368 1928965 notify.go:220] Checking for updates...
	I0429 14:21:42.573277 1928965 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:21:42.575133 1928965 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	I0429 14:21:42.577200 1928965 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0429 14:21:42.579167 1928965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 14:21:42.581429 1928965 config.go:182] Loaded profile config "functional-304104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:21:42.581974 1928965 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 14:21:42.601387 1928965 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 14:21:42.601498 1928965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:21:42.673139 1928965 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-29 14:21:42.662724224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:21:42.673240 1928965 docker.go:295] overlay module found
	I0429 14:21:42.677178 1928965 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0429 14:21:42.679331 1928965 start.go:297] selected driver: docker
	I0429 14:21:42.679355 1928965 start.go:901] validating driver "docker" against &{Name:functional-304104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-304104 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 14:21:42.679471 1928965 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 14:21:42.682490 1928965 out.go:177] 
	W0429 14:21:42.685218 1928965 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0429 14:21:42.687513 1928965 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-304104 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-304104 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-9wqjh" [68621572-f74e-4507-8e25-4478b01aca4f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-9wqjh" [68621572-f74e-4507-8e25-4478b01aca4f] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.003299648s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31411
functional_test.go:1671: http://192.168.49.2:31411: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6f49f58cd5-9wqjh

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31411
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.71s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [924a03c3-17c5-47bd-b01a-fa6a356e8f99] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003971152s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-304104 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-304104 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-304104 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-304104 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [00e4cee2-4102-479b-9d85-f532005da5fc] Pending
helpers_test.go:344: "sp-pod" [00e4cee2-4102-479b-9d85-f532005da5fc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [00e4cee2-4102-479b-9d85-f532005da5fc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00424599s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-304104 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-304104 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-304104 delete -f testdata/storage-provisioner/pod.yaml: (1.130992134s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-304104 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ef070330-5cf9-4b73-857f-fc5e0f4d1842] Pending
helpers_test.go:344: "sp-pod" [ef070330-5cf9-4b73-857f-fc5e0f4d1842] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003883315s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-304104 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.06s)
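For reference, the claim/pod pair this test applies from testdata/storage-provisioner is not reproduced in the report. A minimal sketch that exercises the same flow (create a PVC, mount it in a pod, write a file, recreate the pod, confirm the file survives) is shown below; only the names myclaim, sp-pod, myfrontend, the test=storage-provisioner label and the /tmp/mount path come from the log, while the storage size, access mode, volume name and image are assumptions and the real manifests may differ.

    kubectl --context functional-304104 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: sp-pod
      labels:
        test: storage-provisioner
    spec:
      containers:
      - name: myfrontend
        image: docker.io/library/nginx:alpine
        volumeMounts:
        - name: mypd
          mountPath: /tmp/mount
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: myclaim
    EOF
    # write a marker file, recreate the pod from the same manifest, then verify it persisted
    kubectl --context functional-304104 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-304104 delete pod sp-pod
    # (re-apply the pod manifest above and wait for it to be Running)
    kubectl --context functional-304104 exec sp-pod -- ls /tmp/mount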

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "cat /etc/hostname"
E0429 14:21:09.125397 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh -n functional-304104 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 cp functional-304104:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1127408171/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh -n functional-304104 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh -n functional-304104 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.39s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1902684/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "sudo cat /etc/test/nested/copy/1902684/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1902684.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "sudo cat /etc/ssl/certs/1902684.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1902684.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "sudo cat /usr/share/ca-certificates/1902684.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/19026842.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "sudo cat /etc/ssl/certs/19026842.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/19026842.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "sudo cat /usr/share/ca-certificates/19026842.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.32s)
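The last entry checked in each group above is a hash-style name (/etc/ssl/certs/51391683.0, /etc/ssl/certs/3ec20f2e.0), the usual OpenSSL CA-directory convention of linking a certificate under its subject hash. Assuming the synced file is a plain PEM certificate and that openssl is available in the node image, the hash behind such a name can be checked by hand, e.g.:

    out/minikube-linux-arm64 -p functional-304104 ssh "sudo openssl x509 -noout -subject_hash -in /etc/ssl/certs/1902684.pem"
    # legacy hash scheme, in case the directory was populated with the old algorithm
    out/minikube-linux-arm64 -p functional-304104 ssh "sudo openssl x509 -noout -subject_hash_old -in /etc/ssl/certs/1902684.pem"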

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-304104 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-304104 ssh "sudo systemctl is-active docker": exit status 1 (407.877647ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-304104 ssh "sudo systemctl is-active containerd": exit status 1 (402.479527ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.81s)
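The "Process exited with status 3" recorded above is what `systemctl is-active` returns for a unit that is loaded but not running: the state is printed to stdout and the command exits 0 only when the unit is active, which is why the docker and containerd probes fail on this CRI-O cluster. A quick manual check (the crio unit name is assumed here):

    out/minikube-linux-arm64 -p functional-304104 ssh 'sudo systemctl is-active docker; echo remote-exit=$?'
    # expected: "inactive" with remote-exit=3
    out/minikube-linux-arm64 -p functional-304104 ssh 'sudo systemctl is-active crio; echo remote-exit=$?'
    # expected: "active" with remote-exit=0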

                                                
                                    
x
+
TestFunctional/parallel/License (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-304104 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-304104 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-304104 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-304104 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1926895: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-304104 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-304104 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [91c3dc09-3db6-45bd-a335-1427bda59e76] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [91c3dc09-3db6-45bd-a335-1427bda59e76] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004165476s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.45s)
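testdata/testsvc.yaml is not included in the report; judging from the log (a pod named nginx-svc with the run=nginx-svc label, and a service of the same name that later receives a LoadBalancer ingress IP from the tunnel), a functionally equivalent manifest would look roughly like the sketch below. The image and port are assumptions.

    kubectl --context functional-304104 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-svc
      labels:
        run: nginx-svc
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:alpine
        ports:
        - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
    spec:
      type: LoadBalancer
      selector:
        run: nginx-svc
      ports:
      - port: 80
        targetPort: 80
    EOF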

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-304104 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.150.26 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-304104 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-304104 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-304104 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-sdcld" [d9b8288e-2849-4e53-9c40-d68aa97ebfdb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-sdcld" [d9b8288e-2849-4e53-9c40-d68aa97ebfdb] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.01091863s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)
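The NodePort that the ServiceCmd checks below end up hitting (port 31437 on 192.168.49.2) is whatever the API server assigned when the deployment was exposed above; it can be read back by hand, for example:

    kubectl --context functional-304104 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'
    out/minikube-linux-arm64 -p functional-304104 ip    # the node IP used to build the URL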

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "333.855597ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "70.235317ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "327.88449ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "57.489244ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-304104 /tmp/TestFunctionalparallelMountCmdany-port3561459070/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714400497837370329" to /tmp/TestFunctionalparallelMountCmdany-port3561459070/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714400497837370329" to /tmp/TestFunctionalparallelMountCmdany-port3561459070/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714400497837370329" to /tmp/TestFunctionalparallelMountCmdany-port3561459070/001/test-1714400497837370329
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-304104 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (347.021482ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 29 14:21 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 29 14:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 29 14:21 test-1714400497837370329
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh cat /mount-9p/test-1714400497837370329
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-304104 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4e70de73-e83f-40b5-956d-66e1475480a3] Pending
helpers_test.go:344: "busybox-mount" [4e70de73-e83f-40b5-956d-66e1475480a3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4e70de73-e83f-40b5-956d-66e1475480a3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4e70de73-e83f-40b5-956d-66e1475480a3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005040792s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-304104 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-304104 /tmp/TestFunctionalparallelMountCmdany-port3561459070/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.44s)
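The same host-to-guest 9p mount can be exercised by hand with the commands the test issues; a condensed sketch, where /tmp/mount-demo is an example host directory and the busybox-mount manifest itself is not reproduced here:

    out/minikube-linux-arm64 mount -p functional-304104 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-304104 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-304104 ssh -- ls -la /mount-9p
    kill %1    # stop the background mount process when finished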

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 service list -o json
functional_test.go:1490: Took "587.570583ms" to run "out/minikube-linux-arm64 -p functional-304104 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31437
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31437
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-304104 /tmp/TestFunctionalparallelMountCmdspecific-port281006982/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-304104 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (393.184707ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-304104 /tmp/TestFunctionalparallelMountCmdspecific-port281006982/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-304104 ssh "sudo umount -f /mount-9p": exit status 1 (292.552635ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-304104 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-304104 /tmp/TestFunctionalparallelMountCmdspecific-port281006982/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.19s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-304104 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2708733351/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-304104 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2708733351/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-304104 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2708733351/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-304104 ssh "findmnt -T" /mount1: exit status 1 (949.962197ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "findmnt -T" /mount2
E0429 14:21:50.085737 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-304104 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-304104 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2708733351/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-304104 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2708733351/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-304104 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2708733351/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.36s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-304104 version -o=json --components: (1.129919714s)
--- PASS: TestFunctional/parallel/Version/components (1.13s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-304104 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-304104
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-304104 image ls --format short --alsologtostderr:
I0429 14:22:12.225825 1931529 out.go:291] Setting OutFile to fd 1 ...
I0429 14:22:12.226447 1931529 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 14:22:12.226485 1931529 out.go:304] Setting ErrFile to fd 2...
I0429 14:22:12.226505 1931529 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 14:22:12.226765 1931529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
I0429 14:22:12.228406 1931529 config.go:182] Loaded profile config "functional-304104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 14:22:12.228582 1931529 config.go:182] Loaded profile config "functional-304104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 14:22:12.229423 1931529 cli_runner.go:164] Run: docker container inspect functional-304104 --format={{.State.Status}}
I0429 14:22:12.251691 1931529 ssh_runner.go:195] Run: systemctl --version
I0429 14:22:12.251752 1931529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-304104
I0429 14:22:12.271156 1931529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35052 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/functional-304104/id_rsa Username:docker}
I0429 14:22:12.365508 1931529 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)
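As the stderr trace above shows, `image ls` is served by querying the container runtime inside the node over SSH (`sudo crictl images --output json`) and formatting the result. The same raw data can be inspected directly:

    out/minikube-linux-arm64 -p functional-304104 ssh "sudo crictl images --output json"
    # or the human-readable listing straight from CRI-O:
    out/minikube-linux-arm64 -p functional-304104 ssh "sudo crictl images"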

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-304104 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-proxy              | v1.30.0            | cb7eac0b42cc1 | 89.1MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| gcr.io/google-containers/addon-resizer  | functional-304104  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/kube-scheduler          | v1.30.0            | 547adae34140b | 61.6MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-controller-manager | v1.30.0            | 68feac521c0f1 | 108MB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4740c1948d3fc | 60.9MB |
| docker.io/library/nginx                 | alpine             | e664fb1e82890 | 51.5MB |
| docker.io/library/nginx                 | latest             | 786a14303c960 | 197MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 014faa467e297 | 140MB  |
| registry.k8s.io/kube-apiserver          | v1.30.0            | 181f57fd3cdb7 | 114MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-304104 image ls --format table --alsologtostderr:
I0429 14:22:12.844992 1931666 out.go:291] Setting OutFile to fd 1 ...
I0429 14:22:12.845229 1931666 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 14:22:12.845241 1931666 out.go:304] Setting ErrFile to fd 2...
I0429 14:22:12.845246 1931666 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 14:22:12.845518 1931666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
I0429 14:22:12.846170 1931666 config.go:182] Loaded profile config "functional-304104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 14:22:12.846386 1931666 config.go:182] Loaded profile config "functional-304104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 14:22:12.846897 1931666 cli_runner.go:164] Run: docker container inspect functional-304104 --format={{.State.Status}}
I0429 14:22:12.870247 1931666 ssh_runner.go:195] Run: systemctl --version
I0429 14:22:12.870300 1931666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-304104
I0429 14:22:12.916785 1931666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35052 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/functional-304104/id_rsa Username:docker}
I0429 14:22:13.009359 1931666 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-304104 image ls --format json --alsologtostderr:
[{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/bus
ybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb","repoDigests":["registry.k8s.io/kube-apiserver@sha256:603450584095e9beb21ab73002fcd49b6e10f6b0194f1e64cca2e3cffa13123e","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"113538528"},{"id":"cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f","repoDigests":["registry.k8s.io/kube-proxy@sha256:a744a3a6db8ed022077d83357b93766fc252bcf01c572b3c3687c80e1e5faa55","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"89133975"},{"id":"20b332c9a70d8516d849d1ac23eff
5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-304104"],"size":"34114467"},{"id":"e664fb1e82890e5cf53c130a021c0333d897bad1f2406eac7edb29cd41df6b10","repoDigests":["docker.io/library/nginx@sha256:1f37baf7373d386ee9de0437325ae3e0202a3959803fd79144fa0bb27e2b2801","docker.io/library/nginx@sha256:fdbfdaea4fc323f44590e9afeb271da8c345a733bf44c4ad7861201676a95f42"],"repoTags":["docker.io/library/nginx:alpine"],"size":"51540272"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDi
gests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe","registry.k8s.io/kube-controller-manager@sha256:63e991c4fc8bdc8fce68c183d152ba3ab560dc0a9b71ff97332a74a7605bbd3f"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"108229958"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests"
:["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"60940831"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0e04e710e758152f5f467
61588d3e712c5b836839443b9c2c2d45ee511b803e9","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"61568326"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"786a14303c96017fa81cc9756e01811a67bfabba40e5624f453ff2981e501db0","repoDigests":["docker.io/library/nginx@sha256:57cd68207d5a1ebf40d1b686feb8852e6507f4bdbdbe178c5924b9232653a532","docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"197029840"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@
sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b","registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"140414767"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-304104 image ls --format json --alsologtostderr:
I0429 14:22:12.540321 1931590 out.go:291] Setting OutFile to fd 1 ...
I0429 14:22:12.540503 1931590 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 14:22:12.540517 1931590 out.go:304] Setting ErrFile to fd 2...
I0429 14:22:12.540523 1931590 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 14:22:12.540824 1931590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
I0429 14:22:12.541462 1931590 config.go:182] Loaded profile config "functional-304104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 14:22:12.541621 1931590 config.go:182] Loaded profile config "functional-304104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 14:22:12.542310 1931590 cli_runner.go:164] Run: docker container inspect functional-304104 --format={{.State.Status}}
I0429 14:22:12.563329 1931590 ssh_runner.go:195] Run: systemctl --version
I0429 14:22:12.563398 1931590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-304104
I0429 14:22:12.605566 1931590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35052 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/functional-304104/id_rsa Username:docker}
I0429 14:22:12.698010 1931590 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-304104 image ls --format yaml --alsologtostderr:
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:fde0f6062db0a3b3323d76a4cde031f0f891b5b79d12be642b7e5aad68f2836f
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "60940831"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-304104
size: "34114467"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0e04e710e758152f5f46761588d3e712c5b836839443b9c2c2d45ee511b803e9
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "61568326"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:603450584095e9beb21ab73002fcd49b6e10f6b0194f1e64cca2e3cffa13123e
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "113538528"
- id: e664fb1e82890e5cf53c130a021c0333d897bad1f2406eac7edb29cd41df6b10
repoDigests:
- docker.io/library/nginx@sha256:1f37baf7373d386ee9de0437325ae3e0202a3959803fd79144fa0bb27e2b2801
- docker.io/library/nginx@sha256:fdbfdaea4fc323f44590e9afeb271da8c345a733bf44c4ad7861201676a95f42
repoTags:
- docker.io/library/nginx:alpine
size: "51540272"
- id: 786a14303c96017fa81cc9756e01811a67bfabba40e5624f453ff2981e501db0
repoDigests:
- docker.io/library/nginx@sha256:57cd68207d5a1ebf40d1b686feb8852e6507f4bdbdbe178c5924b9232653a532
- docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee
repoTags:
- docker.io/library/nginx:latest
size: "197029840"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
- registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "140414767"
- id: 68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
- registry.k8s.io/kube-controller-manager@sha256:63e991c4fc8bdc8fce68c183d152ba3ab560dc0a9b71ff97332a74a7605bbd3f
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "108229958"
- id: cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f
repoDigests:
- registry.k8s.io/kube-proxy@sha256:a744a3a6db8ed022077d83357b93766fc252bcf01c572b3c3687c80e1e5faa55
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "89133975"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-304104 image ls --format yaml --alsologtostderr:
I0429 14:22:12.207703 1931528 out.go:291] Setting OutFile to fd 1 ...
I0429 14:22:12.207901 1931528 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 14:22:12.207915 1931528 out.go:304] Setting ErrFile to fd 2...
I0429 14:22:12.207922 1931528 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 14:22:12.208208 1931528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
I0429 14:22:12.208943 1931528 config.go:182] Loaded profile config "functional-304104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 14:22:12.209117 1931528 config.go:182] Loaded profile config "functional-304104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 14:22:12.209635 1931528 cli_runner.go:164] Run: docker container inspect functional-304104 --format={{.State.Status}}
I0429 14:22:12.238950 1931528 ssh_runner.go:195] Run: systemctl --version
I0429 14:22:12.239098 1931528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-304104
I0429 14:22:12.262076 1931528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35052 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/functional-304104/id_rsa Username:docker}
I0429 14:22:12.353053 1931528 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-304104 ssh pgrep buildkitd: exit status 1 (330.507957ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image build -t localhost/my-image:functional-304104 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-304104 image build -t localhost/my-image:functional-304104 testdata/build --alsologtostderr: (2.399734212s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-304104 image build -t localhost/my-image:functional-304104 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 28108ecd846
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-304104
--> fa74dc0d9eb
Successfully tagged localhost/my-image:functional-304104
fa74dc0d9eba89cc1881af9b3b09cd56345415019d3198f14279e30e5c20e5f4
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-304104 image build -t localhost/my-image:functional-304104 testdata/build --alsologtostderr:
I0429 14:22:12.831078 1931665 out.go:291] Setting OutFile to fd 1 ...
I0429 14:22:12.832207 1931665 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 14:22:12.832252 1931665 out.go:304] Setting ErrFile to fd 2...
I0429 14:22:12.832281 1931665 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 14:22:12.832566 1931665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
I0429 14:22:12.833336 1931665 config.go:182] Loaded profile config "functional-304104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 14:22:12.833972 1931665 config.go:182] Loaded profile config "functional-304104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 14:22:12.834481 1931665 cli_runner.go:164] Run: docker container inspect functional-304104 --format={{.State.Status}}
I0429 14:22:12.853222 1931665 ssh_runner.go:195] Run: systemctl --version
I0429 14:22:12.853275 1931665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-304104
I0429 14:22:12.871954 1931665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35052 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/functional-304104/id_rsa Username:docker}
I0429 14:22:12.973162 1931665 build_images.go:161] Building image from path: /tmp/build.3142995266.tar
I0429 14:22:12.973242 1931665 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0429 14:22:12.983540 1931665 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3142995266.tar
I0429 14:22:12.986984 1931665 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3142995266.tar: stat -c "%s %y" /var/lib/minikube/build/build.3142995266.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3142995266.tar': No such file or directory
I0429 14:22:12.987009 1931665 ssh_runner.go:362] scp /tmp/build.3142995266.tar --> /var/lib/minikube/build/build.3142995266.tar (3072 bytes)
I0429 14:22:13.021029 1931665 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3142995266
I0429 14:22:13.040047 1931665 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3142995266 -xf /var/lib/minikube/build/build.3142995266.tar
I0429 14:22:13.049202 1931665 crio.go:315] Building image: /var/lib/minikube/build/build.3142995266
I0429 14:22:13.049290 1931665 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-304104 /var/lib/minikube/build/build.3142995266 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0429 14:22:15.123322 1931665 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-304104 /var/lib/minikube/build/build.3142995266 --cgroup-manager=cgroupfs: (2.074005761s)
I0429 14:22:15.123385 1931665 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3142995266
I0429 14:22:15.132933 1931665 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3142995266.tar
I0429 14:22:15.142451 1931665 build_images.go:217] Built localhost/my-image:functional-304104 from /tmp/build.3142995266.tar
I0429 14:22:15.142482 1931665 build_images.go:133] succeeded building to: functional-304104
I0429 14:22:15.142487 1931665 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.97s)
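
For reference, the build exercised above is roughly reproducible by hand. A minimal sketch, assuming a local build context equivalent to the three steps shown in the log (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /); paths under /tmp are illustrative, not the harness's testdata/build directory:
	# recreate an equivalent build context and build it into the cluster's CRI-O storage
	mkdir -p /tmp/build && cd /tmp/build
	printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
	touch content.txt
	out/minikube-linux-arm64 -p functional-304104 image build -t localhost/my-image:functional-304104 . --alsologtostderr
	out/minikube-linux-arm64 -p functional-304104 image ls | grep my-image   # the new tag should be listed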

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.488670872s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-304104
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image load --daemon gcr.io/google-containers/addon-resizer:functional-304104 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-304104 image load --daemon gcr.io/google-containers/addon-resizer:functional-304104 --alsologtostderr: (4.435980997s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image ls
2024/04/29 14:21:59 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.68s)
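
The load-into-daemon path above maps to a short manual sequence; a sketch using the same image and profile names that appear in this run's log:
	# tag a host-side image and push it into the cluster's container runtime, then verify
	docker pull gcr.io/google-containers/addon-resizer:1.8.8
	docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-304104
	out/minikube-linux-arm64 -p functional-304104 image load --daemon gcr.io/google-containers/addon-resizer:functional-304104
	out/minikube-linux-arm64 -p functional-304104 image ls | grep addon-resizer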

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image load --daemon gcr.io/google-containers/addon-resizer:functional-304104 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-304104 image load --daemon gcr.io/google-containers/addon-resizer:functional-304104 --alsologtostderr: (3.252095038s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.49s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)
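
update-context is expected to rewrite the profile's kubeconfig entry so kubectl points at the cluster's current endpoint; a minimal check, assuming kubectl is on PATH:
	out/minikube-linux-arm64 -p functional-304104 update-context
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'   # should print the profile's current API server address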

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.712230292s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-304104
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image load --daemon gcr.io/google-containers/addon-resizer:functional-304104 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-304104 image load --daemon gcr.io/google-containers/addon-resizer:functional-304104 --alsologtostderr: (3.719135257s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.69s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image save gcr.io/google-containers/addon-resizer:functional-304104 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.88s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image rm gcr.io/google-containers/addon-resizer:functional-304104 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-304104 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.064733359s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-304104
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-304104 image save --daemon gcr.io/google-containers/addon-resizer:functional-304104 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-304104
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.92s)
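
Taken together, the save/remove/load/save-daemon tests above form a tarball round trip; a condensed sketch, with /tmp/addon-resizer-save.tar standing in for the workspace path used by the job:
	# save from the cluster to a tarball, drop the in-cluster copy, load it back, then copy it into the host docker daemon
	out/minikube-linux-arm64 -p functional-304104 image save gcr.io/google-containers/addon-resizer:functional-304104 /tmp/addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-304104 image rm gcr.io/google-containers/addon-resizer:functional-304104
	out/minikube-linux-arm64 -p functional-304104 image load /tmp/addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-304104 image save --daemon gcr.io/google-containers/addon-resizer:functional-304104
	docker image inspect gcr.io/google-containers/addon-resizer:functional-304104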

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-304104
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-304104
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-304104
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (157.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-581657 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0429 14:23:12.006315 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-581657 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m36.489585674s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (157.31s)
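
The HA start above is the plain CLI invocation from ha_test.go:101; a sketch of the same bring-up against a throwaway profile (the ha-demo name is hypothetical, the flags are the ones in the log):
	out/minikube-linux-arm64 start -p ha-demo --wait=true --memory=2200 --ha --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p ha-demo status   # expect multiple control-plane entries once the cluster settles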

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-581657 -- rollout status deployment/busybox: (4.326257124s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- exec busybox-fc5497c4f-fp22k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- exec busybox-fc5497c4f-jpbj7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- exec busybox-fc5497c4f-sshpb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- exec busybox-fc5497c4f-fp22k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- exec busybox-fc5497c4f-jpbj7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- exec busybox-fc5497c4f-sshpb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- exec busybox-fc5497c4f-fp22k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- exec busybox-fc5497c4f-jpbj7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- exec busybox-fc5497c4f-sshpb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.59s)
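
The deploy step applies a stock busybox Deployment and checks in-pod DNS; a sketch using kubectl directly against the same context (assumes the busybox pods are the only pods in the default namespace, as in this run):
	kubectl --context ha-581657 apply -f ./testdata/ha/ha-pod-dns-test.yaml
	kubectl --context ha-581657 rollout status deployment/busybox
	POD=$(kubectl --context ha-581657 get pods -o jsonpath='{.items[0].metadata.name}')
	kubectl --context ha-581657 exec "$POD" -- nslookup kubernetes.default.svc.cluster.local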

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- exec busybox-fc5497c4f-fp22k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- exec busybox-fc5497c4f-fp22k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- exec busybox-fc5497c4f-jpbj7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- exec busybox-fc5497c4f-jpbj7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- exec busybox-fc5497c4f-sshpb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-581657 -- exec busybox-fc5497c4f-sshpb -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (53.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-581657 -v=7 --alsologtostderr
E0429 14:25:28.162910 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
E0429 14:25:55.847289 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-581657 -v=7 --alsologtostderr: (52.216817134s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.19s)
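
Adding the worker is a single node add without --control-plane; a sketch:
	out/minikube-linux-arm64 node add -p ha-581657
	out/minikube-linux-arm64 -p ha-581657 status   # the new node should appear with type: Worker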

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-581657 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp testdata/cp-test.txt ha-581657:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp ha-581657:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3176475453/001/cp-test_ha-581657.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp ha-581657:/home/docker/cp-test.txt ha-581657-m02:/home/docker/cp-test_ha-581657_ha-581657-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m02 "sudo cat /home/docker/cp-test_ha-581657_ha-581657-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp ha-581657:/home/docker/cp-test.txt ha-581657-m03:/home/docker/cp-test_ha-581657_ha-581657-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m03 "sudo cat /home/docker/cp-test_ha-581657_ha-581657-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp ha-581657:/home/docker/cp-test.txt ha-581657-m04:/home/docker/cp-test_ha-581657_ha-581657-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m04 "sudo cat /home/docker/cp-test_ha-581657_ha-581657-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp testdata/cp-test.txt ha-581657-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp ha-581657-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3176475453/001/cp-test_ha-581657-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp ha-581657-m02:/home/docker/cp-test.txt ha-581657:/home/docker/cp-test_ha-581657-m02_ha-581657.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657 "sudo cat /home/docker/cp-test_ha-581657-m02_ha-581657.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp ha-581657-m02:/home/docker/cp-test.txt ha-581657-m03:/home/docker/cp-test_ha-581657-m02_ha-581657-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m03 "sudo cat /home/docker/cp-test_ha-581657-m02_ha-581657-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp ha-581657-m02:/home/docker/cp-test.txt ha-581657-m04:/home/docker/cp-test_ha-581657-m02_ha-581657-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m04 "sudo cat /home/docker/cp-test_ha-581657-m02_ha-581657-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp testdata/cp-test.txt ha-581657-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp ha-581657-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3176475453/001/cp-test_ha-581657-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m03 "sudo cat /home/docker/cp-test.txt"
E0429 14:26:10.051968 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
E0429 14:26:10.057202 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
E0429 14:26:10.068361 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
E0429 14:26:10.088629 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
E0429 14:26:10.131198 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
E0429 14:26:10.211477 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp ha-581657-m03:/home/docker/cp-test.txt ha-581657:/home/docker/cp-test_ha-581657-m03_ha-581657.txt
E0429 14:26:10.371864 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
E0429 14:26:10.692396 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657 "sudo cat /home/docker/cp-test_ha-581657-m03_ha-581657.txt"
E0429 14:26:11.333029 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp ha-581657-m03:/home/docker/cp-test.txt ha-581657-m02:/home/docker/cp-test_ha-581657-m03_ha-581657-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m02 "sudo cat /home/docker/cp-test_ha-581657-m03_ha-581657-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp ha-581657-m03:/home/docker/cp-test.txt ha-581657-m04:/home/docker/cp-test_ha-581657-m03_ha-581657-m04.txt
E0429 14:26:12.613202 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m04 "sudo cat /home/docker/cp-test_ha-581657-m03_ha-581657-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp testdata/cp-test.txt ha-581657-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp ha-581657-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3176475453/001/cp-test_ha-581657-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp ha-581657-m04:/home/docker/cp-test.txt ha-581657:/home/docker/cp-test_ha-581657-m04_ha-581657.txt
E0429 14:26:15.174346 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657 "sudo cat /home/docker/cp-test_ha-581657-m04_ha-581657.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp ha-581657-m04:/home/docker/cp-test.txt ha-581657-m02:/home/docker/cp-test_ha-581657-m04_ha-581657-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m02 "sudo cat /home/docker/cp-test_ha-581657-m04_ha-581657-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 cp ha-581657-m04:/home/docker/cp-test.txt ha-581657-m03:/home/docker/cp-test_ha-581657-m04_ha-581657-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m03 "sudo cat /home/docker/cp-test_ha-581657-m04_ha-581657-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.15s)
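
The copy matrix above boils down to two primitives, cp and ssh -n; a sketch for one source/target pair (the /tmp destination is illustrative):
	out/minikube-linux-arm64 -p ha-581657 cp testdata/cp-test.txt ha-581657-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-581657 ssh -n ha-581657-m02 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-arm64 -p ha-581657 cp ha-581657-m02:/home/docker/cp-test.txt /tmp/cp-test-m02.txt   # node-to-host copy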

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 node stop m02 -v=7 --alsologtostderr
E0429 14:26:20.295361 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-581657 node stop m02 -v=7 --alsologtostderr: (11.977293497s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 status -v=7 --alsologtostderr
E0429 14:26:30.536501 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-581657 status -v=7 --alsologtostderr: exit status 7 (756.413435ms)

                                                
                                                
-- stdout --
	ha-581657
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-581657-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-581657-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-581657-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 14:26:30.120376 1946532 out.go:291] Setting OutFile to fd 1 ...
	I0429 14:26:30.120648 1946532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:26:30.120708 1946532 out.go:304] Setting ErrFile to fd 2...
	I0429 14:26:30.120728 1946532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:26:30.121236 1946532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
	I0429 14:26:30.121566 1946532 out.go:298] Setting JSON to false
	I0429 14:26:30.121631 1946532 mustload.go:65] Loading cluster: ha-581657
	I0429 14:26:30.121719 1946532 notify.go:220] Checking for updates...
	I0429 14:26:30.123696 1946532 config.go:182] Loaded profile config "ha-581657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:26:30.123851 1946532 status.go:255] checking status of ha-581657 ...
	I0429 14:26:30.127602 1946532 cli_runner.go:164] Run: docker container inspect ha-581657 --format={{.State.Status}}
	I0429 14:26:30.146058 1946532 status.go:330] ha-581657 host status = "Running" (err=<nil>)
	I0429 14:26:30.146085 1946532 host.go:66] Checking if "ha-581657" exists ...
	I0429 14:26:30.146518 1946532 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-581657
	I0429 14:26:30.165791 1946532 host.go:66] Checking if "ha-581657" exists ...
	I0429 14:26:30.166168 1946532 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 14:26:30.166232 1946532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657
	I0429 14:26:30.199243 1946532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35057 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657/id_rsa Username:docker}
	I0429 14:26:30.290596 1946532 ssh_runner.go:195] Run: systemctl --version
	I0429 14:26:30.295166 1946532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 14:26:30.307474 1946532 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:26:30.373819 1946532 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:72 SystemTime:2024-04-29 14:26:30.362797708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:26:30.374480 1946532 kubeconfig.go:125] found "ha-581657" server: "https://192.168.49.254:8443"
	I0429 14:26:30.374529 1946532 api_server.go:166] Checking apiserver status ...
	I0429 14:26:30.374585 1946532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 14:26:30.386137 1946532 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1361/cgroup
	I0429 14:26:30.396091 1946532 api_server.go:182] apiserver freezer: "12:freezer:/docker/c01b3fbb28813eca464cc45cefddcc7f5af1da2db2412a06939d424a5b6a6b34/crio/crio-49f94b721c2ea74f60df64c5c055335357bed33bd8af3480858e56ae6f6c4d85"
	I0429 14:26:30.396165 1946532 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c01b3fbb28813eca464cc45cefddcc7f5af1da2db2412a06939d424a5b6a6b34/crio/crio-49f94b721c2ea74f60df64c5c055335357bed33bd8af3480858e56ae6f6c4d85/freezer.state
	I0429 14:26:30.405688 1946532 api_server.go:204] freezer state: "THAWED"
	I0429 14:26:30.405717 1946532 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0429 14:26:30.413561 1946532 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0429 14:26:30.413591 1946532 status.go:422] ha-581657 apiserver status = Running (err=<nil>)
	I0429 14:26:30.413603 1946532 status.go:257] ha-581657 status: &{Name:ha-581657 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 14:26:30.413620 1946532 status.go:255] checking status of ha-581657-m02 ...
	I0429 14:26:30.413925 1946532 cli_runner.go:164] Run: docker container inspect ha-581657-m02 --format={{.State.Status}}
	I0429 14:26:30.430545 1946532 status.go:330] ha-581657-m02 host status = "Stopped" (err=<nil>)
	I0429 14:26:30.430572 1946532 status.go:343] host is not running, skipping remaining checks
	I0429 14:26:30.430579 1946532 status.go:257] ha-581657-m02 status: &{Name:ha-581657-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 14:26:30.430600 1946532 status.go:255] checking status of ha-581657-m03 ...
	I0429 14:26:30.430926 1946532 cli_runner.go:164] Run: docker container inspect ha-581657-m03 --format={{.State.Status}}
	I0429 14:26:30.446493 1946532 status.go:330] ha-581657-m03 host status = "Running" (err=<nil>)
	I0429 14:26:30.446528 1946532 host.go:66] Checking if "ha-581657-m03" exists ...
	I0429 14:26:30.446948 1946532 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-581657-m03
	I0429 14:26:30.463260 1946532 host.go:66] Checking if "ha-581657-m03" exists ...
	I0429 14:26:30.463579 1946532 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 14:26:30.463627 1946532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m03
	I0429 14:26:30.480864 1946532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35067 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657-m03/id_rsa Username:docker}
	I0429 14:26:30.570666 1946532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 14:26:30.585884 1946532 kubeconfig.go:125] found "ha-581657" server: "https://192.168.49.254:8443"
	I0429 14:26:30.585916 1946532 api_server.go:166] Checking apiserver status ...
	I0429 14:26:30.585996 1946532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 14:26:30.598592 1946532 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1331/cgroup
	I0429 14:26:30.609484 1946532 api_server.go:182] apiserver freezer: "12:freezer:/docker/5aeed8f279c073d25532258e9874ee1854942d36c67695133d4e2b708f5e2a02/crio/crio-55d91224093b5999ea70788ee9df9526df2b83a2a85311fcd7b95da3c28089b7"
	I0429 14:26:30.609556 1946532 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5aeed8f279c073d25532258e9874ee1854942d36c67695133d4e2b708f5e2a02/crio/crio-55d91224093b5999ea70788ee9df9526df2b83a2a85311fcd7b95da3c28089b7/freezer.state
	I0429 14:26:30.618420 1946532 api_server.go:204] freezer state: "THAWED"
	I0429 14:26:30.618449 1946532 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0429 14:26:30.627905 1946532 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0429 14:26:30.627941 1946532 status.go:422] ha-581657-m03 apiserver status = Running (err=<nil>)
	I0429 14:26:30.627951 1946532 status.go:257] ha-581657-m03 status: &{Name:ha-581657-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 14:26:30.627970 1946532 status.go:255] checking status of ha-581657-m04 ...
	I0429 14:26:30.628270 1946532 cli_runner.go:164] Run: docker container inspect ha-581657-m04 --format={{.State.Status}}
	I0429 14:26:30.649267 1946532 status.go:330] ha-581657-m04 host status = "Running" (err=<nil>)
	I0429 14:26:30.649290 1946532 host.go:66] Checking if "ha-581657-m04" exists ...
	I0429 14:26:30.649597 1946532 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-581657-m04
	I0429 14:26:30.665808 1946532 host.go:66] Checking if "ha-581657-m04" exists ...
	I0429 14:26:30.666124 1946532 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 14:26:30.666181 1946532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-581657-m04
	I0429 14:26:30.682563 1946532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35072 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/ha-581657-m04/id_rsa Username:docker}
	I0429 14:26:30.778736 1946532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 14:26:30.790808 1946532 status.go:257] ha-581657-m04 status: &{Name:ha-581657-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.73s)
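
Stopping one control-plane node and re-checking status is the whole test; note that status exits non-zero (7 in this run) while any node is down, so a sketch tolerates that:
	out/minikube-linux-arm64 -p ha-581657 node stop m02
	out/minikube-linux-arm64 -p ha-581657 status || true   # non-zero exit is expected while m02 is stopped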

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (20.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-581657 node start m02 -v=7 --alsologtostderr: (19.007778719s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 status -v=7 --alsologtostderr
E0429 14:26:51.017554 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-581657 status -v=7 --alsologtostderr: (1.104793753s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.21s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (6.986841457s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.99s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (204.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-581657 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-581657 -v=7 --alsologtostderr
E0429 14:27:31.977811 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-581657 -v=7 --alsologtostderr: (37.075751027s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-581657 --wait=true -v=7 --alsologtostderr
E0429 14:28:53.898849 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-581657 --wait=true -v=7 --alsologtostderr: (2m46.779642749s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-581657
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (204.04s)
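
The restart test is a full stop followed by a --wait=true start on the same profile, with the node list compared before and after; a sketch:
	out/minikube-linux-arm64 node list -p ha-581657   # record the node set before the restart
	out/minikube-linux-arm64 stop -p ha-581657
	out/minikube-linux-arm64 start -p ha-581657 --wait=true
	out/minikube-linux-arm64 node list -p ha-581657   # should match the pre-restart list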

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (13.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 node delete m03 -v=7 --alsologtostderr
E0429 14:30:28.163141 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-581657 node delete m03 -v=7 --alsologtostderr: (12.15498363s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (13.08s)
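
Deleting the third control plane and re-checking node readiness; a sketch using the same go-template the test uses, with the shell quoting adjusted so it runs as written:
	out/minikube-linux-arm64 -p ha-581657 node delete m03
	kubectl get nodes
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'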

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 stop -v=7 --alsologtostderr
E0429 14:31:10.052900 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-581657 stop -v=7 --alsologtostderr: (35.74196761s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-581657 status -v=7 --alsologtostderr: exit status 7 (119.677734ms)

                                                
                                                
-- stdout --
	ha-581657
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-581657-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-581657-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 14:31:12.113428 1960581 out.go:291] Setting OutFile to fd 1 ...
	I0429 14:31:12.113550 1960581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:31:12.113558 1960581 out.go:304] Setting ErrFile to fd 2...
	I0429 14:31:12.113563 1960581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:31:12.113813 1960581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
	I0429 14:31:12.113994 1960581 out.go:298] Setting JSON to false
	I0429 14:31:12.114024 1960581 mustload.go:65] Loading cluster: ha-581657
	I0429 14:31:12.114123 1960581 notify.go:220] Checking for updates...
	I0429 14:31:12.114463 1960581 config.go:182] Loaded profile config "ha-581657": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:31:12.114476 1960581 status.go:255] checking status of ha-581657 ...
	I0429 14:31:12.114962 1960581 cli_runner.go:164] Run: docker container inspect ha-581657 --format={{.State.Status}}
	I0429 14:31:12.134433 1960581 status.go:330] ha-581657 host status = "Stopped" (err=<nil>)
	I0429 14:31:12.134457 1960581 status.go:343] host is not running, skipping remaining checks
	I0429 14:31:12.134467 1960581 status.go:257] ha-581657 status: &{Name:ha-581657 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 14:31:12.134495 1960581 status.go:255] checking status of ha-581657-m02 ...
	I0429 14:31:12.134825 1960581 cli_runner.go:164] Run: docker container inspect ha-581657-m02 --format={{.State.Status}}
	I0429 14:31:12.150579 1960581 status.go:330] ha-581657-m02 host status = "Stopped" (err=<nil>)
	I0429 14:31:12.150601 1960581 status.go:343] host is not running, skipping remaining checks
	I0429 14:31:12.150609 1960581 status.go:257] ha-581657-m02 status: &{Name:ha-581657-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 14:31:12.150653 1960581 status.go:255] checking status of ha-581657-m04 ...
	I0429 14:31:12.150947 1960581 cli_runner.go:164] Run: docker container inspect ha-581657-m04 --format={{.State.Status}}
	I0429 14:31:12.173694 1960581 status.go:330] ha-581657-m04 host status = "Stopped" (err=<nil>)
	I0429 14:31:12.173715 1960581 status.go:343] host is not running, skipping remaining checks
	I0429 14:31:12.173722 1960581 status.go:257] ha-581657-m04 status: &{Name:ha-581657-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.86s)
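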

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (61.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-581657 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-581657 --control-plane -v=7 --alsologtostderr: (1m0.152914901s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-581657 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (61.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)

                                                
                                    
x
+
TestJSONOutput/start/Command (75.94s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-549715 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0429 14:35:28.162940 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-549715 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m15.940384559s)
--- PASS: TestJSONOutput/start/Command (75.94s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-549715 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-549715 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.8s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-549715 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-549715 --output=json --user=testUser: (5.80098315s)
--- PASS: TestJSONOutput/stop/Command (5.80s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-866767 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-866767 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (88.740023ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"89a44681-a0d7-4ee8-9867-6b2be0ffaf2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-866767] minikube v1.33.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c69a3f0-9c6e-435e-ac24-db004a8d5adf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18771"}}
	{"specversion":"1.0","id":"a9840f39-3266-48d3-a5a6-a3ddbf9eec72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"899c1b8f-a1cd-4b39-85eb-3e9faada1191","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig"}}
	{"specversion":"1.0","id":"bf5db7d4-570c-49e2-ab74-44f634a15e0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube"}}
	{"specversion":"1.0","id":"a81d4275-944b-40d2-a1dd-9821a8eda1d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"15e8ab58-8cd6-46ec-92d0-e85cecd52a4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"80a261a3-b188-4cd1-adce-828679abab66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-866767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-866767
--- PASS: TestErrorJSONOutput (0.24s)
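The --output=json run above emits one JSON event per line in a CloudEvents-style shape, with a string-valued data map carrying the message, exit code, and error name. A minimal Go sketch (field names taken from the output above; not minikube's own event type) that decodes the final error event:

	// Sketch only: decodes one of the JSON event lines shown in the stdout above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type event struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","id":"80a261a3-b188-4cd1-adce-828679abab66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			panic(err)
		}
		fmt.Println(e.Type, e.Data["exitcode"], e.Data["message"])
	}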

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (39.72s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-016003 --network=
E0429 14:36:10.052475 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-016003 --network=: (37.587762161s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-016003" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-016003
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-016003: (2.11354963s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.72s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (34.7s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-649846 --network=bridge
E0429 14:36:51.208403 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-649846 --network=bridge: (32.681212276s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-649846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-649846
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-649846: (1.996557415s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.70s)

                                                
                                    
x
+
TestKicExistingNetwork (33.91s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-111257 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-111257 --network=existing-network: (31.77915203s)
helpers_test.go:175: Cleaning up "existing-network-111257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-111257
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-111257: (1.977521062s)
--- PASS: TestKicExistingNetwork (33.91s)

                                                
                                    
x
+
TestKicCustomSubnet (33.18s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-382140 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-382140 --subnet=192.168.60.0/24: (31.032273034s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-382140 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-382140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-382140
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-382140: (2.128781092s)
--- PASS: TestKicCustomSubnet (33.18s)
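The test above validates the requested subnet by asking Docker for the network's IPAM config. A small Go sketch of the same check (network name and expected subnet are the ones from this run; assumes Docker is on PATH):

	// Sketch of the subnet check performed above via `docker network inspect`.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-382140",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		got := strings.TrimSpace(string(out))
		fmt.Println("subnet:", got, "matches:", got == "192.168.60.0/24")
	}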

                                                
                                    
x
+
TestKicStaticIP (33.18s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-527679 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-527679 --static-ip=192.168.200.200: (30.914078124s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-527679 ip
helpers_test.go:175: Cleaning up "static-ip-527679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-527679
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-527679: (2.10076614s)
--- PASS: TestKicStaticIP (33.18s)

                                                
                                    
x
+
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
x
+
TestMinikubeProfile (64.86s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-488363 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-488363 --driver=docker  --container-runtime=crio: (29.897208591s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-491071 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-491071 --driver=docker  --container-runtime=crio: (29.740893866s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-488363
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-491071
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-491071" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-491071
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-491071: (1.961611251s)
helpers_test.go:175: Cleaning up "first-488363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-488363
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-488363: (2.051782471s)
--- PASS: TestMinikubeProfile (64.86s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.76s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-774059 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-774059 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.761891922s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.76s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-774059 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.3s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-787868 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-787868 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.299287088s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.30s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-787868 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-774059 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-774059 --alsologtostderr -v=5: (1.626776032s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-787868 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-787868
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-787868: (1.207794516s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.63s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-787868
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-787868: (6.625467919s)
--- PASS: TestMountStart/serial/RestartStopped (7.63s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-787868 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (121.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-688861 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0429 14:40:28.162787 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
E0429 14:41:10.052146 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-688861 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m0.991677991s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (121.52s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688861 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688861 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-688861 -- rollout status deployment/busybox: (3.84583368s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688861 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688861 -- get pods -o jsonpath='{.items[*].metadata.name}'
E0429 14:42:33.099898 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688861 -- exec busybox-fc5497c4f-gjcn2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688861 -- exec busybox-fc5497c4f-nprgf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688861 -- exec busybox-fc5497c4f-gjcn2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688861 -- exec busybox-fc5497c4f-nprgf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688861 -- exec busybox-fc5497c4f-gjcn2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688861 -- exec busybox-fc5497c4f-nprgf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.87s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (1.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688861 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688861 -- exec busybox-fc5497c4f-gjcn2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688861 -- exec busybox-fc5497c4f-gjcn2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688861 -- exec busybox-fc5497c4f-nprgf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-688861 -- exec busybox-fc5497c4f-nprgf -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (21.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-688861 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-688861 -v 3 --alsologtostderr: (21.238109643s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (21.89s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-688861 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.32s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 cp testdata/cp-test.txt multinode-688861:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 cp multinode-688861:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile202534784/001/cp-test_multinode-688861.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 cp multinode-688861:/home/docker/cp-test.txt multinode-688861-m02:/home/docker/cp-test_multinode-688861_multinode-688861-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861-m02 "sudo cat /home/docker/cp-test_multinode-688861_multinode-688861-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 cp multinode-688861:/home/docker/cp-test.txt multinode-688861-m03:/home/docker/cp-test_multinode-688861_multinode-688861-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861-m03 "sudo cat /home/docker/cp-test_multinode-688861_multinode-688861-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 cp testdata/cp-test.txt multinode-688861-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 cp multinode-688861-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile202534784/001/cp-test_multinode-688861-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 cp multinode-688861-m02:/home/docker/cp-test.txt multinode-688861:/home/docker/cp-test_multinode-688861-m02_multinode-688861.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861 "sudo cat /home/docker/cp-test_multinode-688861-m02_multinode-688861.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 cp multinode-688861-m02:/home/docker/cp-test.txt multinode-688861-m03:/home/docker/cp-test_multinode-688861-m02_multinode-688861-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861-m03 "sudo cat /home/docker/cp-test_multinode-688861-m02_multinode-688861-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 cp testdata/cp-test.txt multinode-688861-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 cp multinode-688861-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile202534784/001/cp-test_multinode-688861-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 cp multinode-688861-m03:/home/docker/cp-test.txt multinode-688861:/home/docker/cp-test_multinode-688861-m03_multinode-688861.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861 "sudo cat /home/docker/cp-test_multinode-688861-m03_multinode-688861.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 cp multinode-688861-m03:/home/docker/cp-test.txt multinode-688861-m02:/home/docker/cp-test_multinode-688861-m03_multinode-688861-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 ssh -n multinode-688861-m02 "sudo cat /home/docker/cp-test_multinode-688861-m03_multinode-688861-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.24s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-688861 node stop m03: (1.218847066s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-688861 status: exit status 7 (528.668389ms)

                                                
                                                
-- stdout --
	multinode-688861
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-688861-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-688861-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-688861 status --alsologtostderr: exit status 7 (521.250293ms)

                                                
                                                
-- stdout --
	multinode-688861
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-688861-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-688861-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 14:43:10.143743 2011800 out.go:291] Setting OutFile to fd 1 ...
	I0429 14:43:10.143888 2011800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:43:10.143953 2011800 out.go:304] Setting ErrFile to fd 2...
	I0429 14:43:10.143966 2011800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:43:10.144262 2011800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
	I0429 14:43:10.144536 2011800 out.go:298] Setting JSON to false
	I0429 14:43:10.144581 2011800 mustload.go:65] Loading cluster: multinode-688861
	I0429 14:43:10.144702 2011800 notify.go:220] Checking for updates...
	I0429 14:43:10.145086 2011800 config.go:182] Loaded profile config "multinode-688861": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:43:10.145100 2011800 status.go:255] checking status of multinode-688861 ...
	I0429 14:43:10.145595 2011800 cli_runner.go:164] Run: docker container inspect multinode-688861 --format={{.State.Status}}
	I0429 14:43:10.165984 2011800 status.go:330] multinode-688861 host status = "Running" (err=<nil>)
	I0429 14:43:10.166012 2011800 host.go:66] Checking if "multinode-688861" exists ...
	I0429 14:43:10.166423 2011800 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-688861
	I0429 14:43:10.184434 2011800 host.go:66] Checking if "multinode-688861" exists ...
	I0429 14:43:10.184859 2011800 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 14:43:10.184929 2011800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-688861
	I0429 14:43:10.206199 2011800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35177 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/multinode-688861/id_rsa Username:docker}
	I0429 14:43:10.294273 2011800 ssh_runner.go:195] Run: systemctl --version
	I0429 14:43:10.298533 2011800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 14:43:10.310322 2011800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:43:10.376217 2011800 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-04-29 14:43:10.365486282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:43:10.376830 2011800 kubeconfig.go:125] found "multinode-688861" server: "https://192.168.67.2:8443"
	I0429 14:43:10.376872 2011800 api_server.go:166] Checking apiserver status ...
	I0429 14:43:10.376918 2011800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 14:43:10.388376 2011800 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	I0429 14:43:10.398092 2011800 api_server.go:182] apiserver freezer: "12:freezer:/docker/a9155b8b20638548022263c51704353a6b490029f07153eced3f6bfe30d45431/crio/crio-dad8c3e7b1a80bbc9d9a10589cecfd1f6bba5a1ef672a3564bf4d4de3ec5d0f1"
	I0429 14:43:10.398159 2011800 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a9155b8b20638548022263c51704353a6b490029f07153eced3f6bfe30d45431/crio/crio-dad8c3e7b1a80bbc9d9a10589cecfd1f6bba5a1ef672a3564bf4d4de3ec5d0f1/freezer.state
	I0429 14:43:10.407520 2011800 api_server.go:204] freezer state: "THAWED"
	I0429 14:43:10.407556 2011800 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0429 14:43:10.415653 2011800 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0429 14:43:10.415684 2011800 status.go:422] multinode-688861 apiserver status = Running (err=<nil>)
	I0429 14:43:10.415696 2011800 status.go:257] multinode-688861 status: &{Name:multinode-688861 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 14:43:10.415739 2011800 status.go:255] checking status of multinode-688861-m02 ...
	I0429 14:43:10.416065 2011800 cli_runner.go:164] Run: docker container inspect multinode-688861-m02 --format={{.State.Status}}
	I0429 14:43:10.432392 2011800 status.go:330] multinode-688861-m02 host status = "Running" (err=<nil>)
	I0429 14:43:10.432473 2011800 host.go:66] Checking if "multinode-688861-m02" exists ...
	I0429 14:43:10.432819 2011800 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-688861-m02
	I0429 14:43:10.450517 2011800 host.go:66] Checking if "multinode-688861-m02" exists ...
	I0429 14:43:10.450819 2011800 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 14:43:10.450863 2011800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-688861-m02
	I0429 14:43:10.468471 2011800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35182 SSHKeyPath:/home/jenkins/minikube-integration/18771-1897267/.minikube/machines/multinode-688861-m02/id_rsa Username:docker}
	I0429 14:43:10.553864 2011800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 14:43:10.566181 2011800 status.go:257] multinode-688861-m02 status: &{Name:multinode-688861-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0429 14:43:10.566225 2011800 status.go:255] checking status of multinode-688861-m03 ...
	I0429 14:43:10.566563 2011800 cli_runner.go:164] Run: docker container inspect multinode-688861-m03 --format={{.State.Status}}
	I0429 14:43:10.587971 2011800 status.go:330] multinode-688861-m03 host status = "Stopped" (err=<nil>)
	I0429 14:43:10.587996 2011800 status.go:343] host is not running, skipping remaining checks
	I0429 14:43:10.588003 2011800 status.go:257] multinode-688861-m03 status: &{Name:multinode-688861-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
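In the stderr above, the status command locates the kube-apiserver process, confirms its freezer cgroup state is THAWED, and then probes https://192.168.67.2:8443/healthz. A minimal Go sketch of that final health probe (the real check authenticates with the cluster's certificates; this sketch skips TLS verification for brevity):

	// Sketch only: HTTPS GET against the apiserver /healthz endpoint logged above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		}}
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.StatusCode) // the log above shows 200 / "ok"
	}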

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (9.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-688861 node start m03 -v=7 --alsologtostderr: (8.897713298s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.64s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (81.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-688861
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-688861
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-688861: (24.812476535s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-688861 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-688861 --wait=true -v=8 --alsologtostderr: (56.869967683s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-688861
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.83s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-688861 node delete m03: (4.620636813s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.29s)
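The readiness check above shells out to kubectl with a go-template over .status.conditions. For comparison, an equivalent check sketched with client-go (assumes a kubeconfig at the default path; this is not what the test itself runs):

	// Sketch only: list nodes and print their Ready condition via client-go.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Println(n.Name, c.Status) // expect "True" for each remaining node
				}
			}
		}
	}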

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-688861 stop: (23.781593402s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-688861 status: exit status 7 (99.586408ms)

                                                
                                                
-- stdout --
	multinode-688861
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-688861-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-688861 status --alsologtostderr: exit status 7 (100.847905ms)

                                                
                                                
-- stdout --
	multinode-688861
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-688861-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 14:45:11.301498 2018861 out.go:291] Setting OutFile to fd 1 ...
	I0429 14:45:11.301728 2018861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:45:11.301755 2018861 out.go:304] Setting ErrFile to fd 2...
	I0429 14:45:11.301773 2018861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:45:11.302043 2018861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
	I0429 14:45:11.302259 2018861 out.go:298] Setting JSON to false
	I0429 14:45:11.302309 2018861 mustload.go:65] Loading cluster: multinode-688861
	I0429 14:45:11.302396 2018861 notify.go:220] Checking for updates...
	I0429 14:45:11.303571 2018861 config.go:182] Loaded profile config "multinode-688861": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:45:11.303617 2018861 status.go:255] checking status of multinode-688861 ...
	I0429 14:45:11.304157 2018861 cli_runner.go:164] Run: docker container inspect multinode-688861 --format={{.State.Status}}
	I0429 14:45:11.319668 2018861 status.go:330] multinode-688861 host status = "Stopped" (err=<nil>)
	I0429 14:45:11.319686 2018861 status.go:343] host is not running, skipping remaining checks
	I0429 14:45:11.319694 2018861 status.go:257] multinode-688861 status: &{Name:multinode-688861 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 14:45:11.319723 2018861 status.go:255] checking status of multinode-688861-m02 ...
	I0429 14:45:11.320013 2018861 cli_runner.go:164] Run: docker container inspect multinode-688861-m02 --format={{.State.Status}}
	I0429 14:45:11.337979 2018861 status.go:330] multinode-688861-m02 host status = "Stopped" (err=<nil>)
	I0429 14:45:11.338003 2018861 status.go:343] host is not running, skipping remaining checks
	I0429 14:45:11.338027 2018861 status.go:257] multinode-688861-m02 status: &{Name:multinode-688861-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.98s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (55.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-688861 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0429 14:45:28.162750 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-688861 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (54.980069588s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-688861 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.63s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (34.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-688861
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-688861-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-688861-m02 --driver=docker  --container-runtime=crio: exit status 14 (95.889043ms)

                                                
                                                
-- stdout --
	* [multinode-688861-m02] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-688861-m02' is duplicated with machine name 'multinode-688861-m02' in profile 'multinode-688861'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-688861-m03 --driver=docker  --container-runtime=crio
E0429 14:46:10.052443 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-688861-m03 --driver=docker  --container-runtime=crio: (31.89950649s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-688861
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-688861: exit status 80 (321.555296ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-688861 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-688861-m03 already exists in multinode-688861-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-688861-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-688861-m03: (1.929227704s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.31s)
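Note: the conflict check exercised above reduces to launching a second profile whose name collides with an existing machine name and expecting exit status 14 (MK_USAGE). A minimal standalone sketch of that assertion, assuming a minikube binary on PATH and the multinode-688861 profile from this run (this is not the test's own helper code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// multinode-688861-m02 is already a machine inside profile multinode-688861,
	// so starting a new profile under that name should be rejected.
	cmd := exec.Command("minikube", "start", "-p", "multinode-688861-m02",
		"--driver=docker", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 14 {
		fmt.Printf("rejected with MK_USAGE as expected:\n%s", out)
		return
	}
	fmt.Printf("unexpected result (err=%v):\n%s", err, out)
}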

                                                
                                    
x
+
TestPreload (127.88s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-554468 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-554468 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m23.003458151s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-554468 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-554468 image pull gcr.io/k8s-minikube/busybox: (1.78691971s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-554468
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-554468: (5.825937426s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-554468 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-554468 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (34.726469332s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-554468 image list
helpers_test.go:175: Cleaning up "test-preload-554468" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-554468
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-554468: (2.305961299s)
--- PASS: TestPreload (127.88s)
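Note: the preload scenario above boils down to: start without the preloaded tarball, pull an extra image, stop, start again, and confirm the image survived the restart. A rough sketch of that sequence, assuming a minikube binary on PATH and reusing the profile name from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// run shells out to minikube and aborts on any failure.
func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	p := "test-preload-554468" // profile name from the log above
	run("start", "-p", p, "--memory=2200", "--preload=false",
		"--driver=docker", "--container-runtime=crio", "--kubernetes-version=v1.24.4")
	run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
	run("stop", "-p", p)
	run("start", "-p", p, "--memory=2200", "--driver=docker", "--container-runtime=crio")

	// The image pulled before the stop should still be listed afterwards.
	if strings.Contains(run("-p", p, "image", "list"), "busybox") {
		fmt.Println("busybox image survived the restart")
	} else {
		fmt.Println("busybox image missing after restart")
	}
}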

                                                
                                    
x
+
TestScheduledStopUnix (106.37s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-315560 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-315560 --memory=2048 --driver=docker  --container-runtime=crio: (29.312682103s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-315560 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-315560 -n scheduled-stop-315560
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-315560 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-315560 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-315560 -n scheduled-stop-315560
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-315560
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-315560 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0429 14:50:28.163635 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-315560
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-315560: exit status 7 (87.164453ms)

                                                
                                                
-- stdout --
	scheduled-stop-315560
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-315560 -n scheduled-stop-315560
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-315560 -n scheduled-stop-315560: exit status 7 (76.995768ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-315560" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-315560
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-315560: (5.444236589s)
--- PASS: TestScheduledStopUnix (106.37s)
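Note: the scheduled-stop flow above needs only two commands: schedule the stop, then poll the host state until it reports Stopped (exit status 7 from minikube status is expected once the host is down, hence the "may be ok" note in the log). A small sketch under those assumptions:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	p := "scheduled-stop-315560" // profile name from the log above

	// Schedule the stop 15s out, as the test does.
	if out, err := exec.Command("minikube", "stop", "-p", p, "--schedule", "15s").CombinedOutput(); err != nil {
		fmt.Printf("scheduling the stop failed: %v\n%s", err, out)
		return
	}

	// Poll the host status; ignore the non-zero exit once the host is stopped.
	for i := 0; i < 20; i++ {
		out, _ := exec.Command("minikube", "status", "--format={{.Host}}", "-p", p).CombinedOutput()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("host stopped as scheduled")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("host never reached the Stopped state")
}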

                                                
                                    
x
+
TestInsufficientStorage (10.97s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-395368 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-395368 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.479271761s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e3ca1aae-3dda-4d83-96fd-a25890b5026a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-395368] minikube v1.33.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b941b07d-7d0e-48e3-8880-1e5959f5f607","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18771"}}
	{"specversion":"1.0","id":"bde3b41c-8086-47a9-8f10-6f335f483437","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c0308f1f-a69f-4525-b336-46b7168a38ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig"}}
	{"specversion":"1.0","id":"d0c12bee-18ae-4c4d-adb1-bed326fedf52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube"}}
	{"specversion":"1.0","id":"c341d50d-0af4-497e-a101-c9a0addc029c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4177cc6d-fb30-43ed-a6b6-8b4106b3e1be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4cc45dc3-181c-4630-ad31-86a77a045bc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"db8242af-79db-4df7-948f-6d7c99220043","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"48b85cbd-cdd8-47f7-8fae-b5b03c9d0d6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d720d1a7-a1d4-4028-bb31-ec33e3910095","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"95f300b0-130f-49ce-bfb3-50da07525606","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-395368\" primary control-plane node in \"insufficient-storage-395368\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5f4df1a0-ecb2-45c8-b96a-440c98aea3e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1713736339-18706 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4a08211-2636-439d-a2ae-936c2ffbc21e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"84b22537-9d7a-4290-98c2-4dd0809d199a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-395368 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-395368 --output=json --layout=cluster: exit status 7 (277.473668ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-395368","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-395368","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 14:50:48.332829 2035354 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-395368" does not appear in /home/jenkins/minikube-integration/18771-1897267/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-395368 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-395368 --output=json --layout=cluster: exit status 7 (287.999505ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-395368","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-395368","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 14:50:48.621935 2035408 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-395368" does not appear in /home/jenkins/minikube-integration/18771-1897267/kubeconfig
	E0429 14:50:48.631748 2035408 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/insufficient-storage-395368/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-395368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-395368
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-395368: (1.922698891s)
--- PASS: TestInsufficientStorage (10.97s)
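Note: the --output=json stream above is a series of CloudEvents-style objects, one per line; the storage failure arrives as a type io.k8s.sigs.minikube.error event whose data carries the name (RSRC_DOCKER_STORAGE), message and exit code. A small sketch for scanning such a stream (field names are taken from the output above; reading from stdin is an assumption):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the JSON lines above.
type event struct {
	Type string `json:"type"`
	Data struct {
		Name     string `json:"name"`
		Message  string `json:"message"`
		ExitCode string `json:"exitcode"`
	} `json:"data"`
}

func main() {
	// Feed the JSON lines on stdin, e.g. by piping the start command's output here.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event %s (exit code %s): %s\n",
				ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
		}
	}
}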

                                                
                                    
x
+
TestRunningBinaryUpgrade (72.6s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1481590963 start -p running-upgrade-195173 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1481590963 start -p running-upgrade-195173 --memory=2200 --vm-driver=docker  --container-runtime=crio: (33.484761927s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-195173 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0429 14:55:28.162757 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-195173 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.529277136s)
helpers_test.go:175: Cleaning up "running-upgrade-195173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-195173
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-195173: (3.103287159s)
--- PASS: TestRunningBinaryUpgrade (72.60s)

                                                
                                    
x
+
TestKubernetesUpgrade (384.94s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-960980 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-960980 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m4.248133395s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-960980
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-960980: (2.270276804s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-960980 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-960980 status --format={{.Host}}: exit status 7 (111.985232ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-960980 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-960980 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m44.437053113s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-960980 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-960980 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-960980 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (137.805796ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-960980] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-960980
	    minikube start -p kubernetes-upgrade-960980 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9609802 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-960980 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-960980 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-960980 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.233150535s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-960980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-960980
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-960980: (2.387050562s)
--- PASS: TestKubernetesUpgrade (384.94s)
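Note: the downgrade step above is a pure negative test: asking the existing v1.30.0 cluster to start at v1.20.0 must fail with exit status 106 and the K8S_DOWNGRADE_UNSUPPORTED reason. A minimal sketch of that check, assuming the kubernetes-upgrade-960980 profile from this run still exists:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	p := "kubernetes-upgrade-960980" // profile name from the log above

	// Attempt to start the existing v1.30.0 cluster at an older Kubernetes version.
	cmd := exec.Command("minikube", "start", "-p", p,
		"--memory=2200", "--kubernetes-version=v1.20.0",
		"--driver=docker", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	exitErr, ok := err.(*exec.ExitError)
	if ok && exitErr.ExitCode() == 106 &&
		strings.Contains(string(out), "K8S_DOWNGRADE_UNSUPPORTED") {
		fmt.Println("downgrade refused as expected")
		return
	}
	fmt.Printf("unexpected result (err=%v):\n%s", err, out)
}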

                                                
                                    
x
+
TestMissingContainerUpgrade (160.12s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.945731548 start -p missing-upgrade-828310 --memory=2200 --driver=docker  --container-runtime=crio
E0429 14:51:10.052363 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.945731548 start -p missing-upgrade-828310 --memory=2200 --driver=docker  --container-runtime=crio: (1m20.121844051s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-828310
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-828310: (10.461724405s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-828310
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-828310 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-828310 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m6.365546287s)
helpers_test.go:175: Cleaning up "missing-upgrade-828310" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-828310
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-828310: (1.969445269s)
--- PASS: TestMissingContainerUpgrade (160.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-991714 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-991714 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (94.882712ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-991714] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (37.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-991714 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-991714 --driver=docker  --container-runtime=crio: (37.273775752s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-991714 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.67s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (8.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-991714 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-991714 --no-kubernetes --driver=docker  --container-runtime=crio: (5.849113024s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-991714 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-991714 status -o json: exit status 2 (460.685778ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-991714","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-991714
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-991714: (2.168753964s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.48s)
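Note: the -o json status printed above is a single flat object, so confirming that the host stays up while kubelet and the API server are stopped is just an unmarshal plus three field checks. A sketch with the field names copied from the output above (the struct itself is illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus mirrors the JSON printed by "minikube status -o json" above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Exit status 2 is expected when components are stopped, so keep the
	// stdout bytes even when the command reports an error.
	out, _ := exec.Command("minikube", "-p", "NoKubernetes-991714",
		"status", "-o", "json").Output()

	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Printf("could not parse status output: %v\n", err)
		return
	}
	if st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped" {
		fmt.Println("host is up with Kubernetes disabled, as expected")
	} else {
		fmt.Printf("unexpected status: %+v\n", st)
	}
}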

                                                
                                    
x
+
TestNoKubernetes/serial/Start (9.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-991714 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-991714 --no-kubernetes --driver=docker  --container-runtime=crio: (9.382408438s)
--- PASS: TestNoKubernetes/serial/Start (9.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-991714 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-991714 "sudo systemctl is-active --quiet service kubelet": exit status 1 (347.564466ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
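Note: the kubelet probe above produces no output at all; systemctl is-active --quiet answers purely via its exit code, and minikube ssh relays the failure as exit status 1 (wrapping the remote status 3 seen in stderr). A sketch of the same probe, assuming the NoKubernetes-991714 profile is still running:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// systemctl is-active --quiet prints nothing; the exit code is the answer.
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-991714",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()

	if err == nil {
		fmt.Println("kubelet is active (unexpected for --no-kubernetes)")
		return
	}
	if _, ok := err.(*exec.ExitError); ok {
		// Any non-zero exit here means the kubelet unit is not active.
		fmt.Println("kubelet is not running, as expected")
		return
	}
	fmt.Printf("could not run the check: %v\n", err)
}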

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-991714
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-991714: (1.265906099s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-991714 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-991714 --driver=docker  --container-runtime=crio: (8.239358312s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-991714 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-991714 "sudo systemctl is-active --quiet service kubelet": exit status 1 (395.351314ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
E0429 14:53:31.208625 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
--- PASS: TestStoppedBinaryUpgrade/Setup (1.42s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (76.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1428222459 start -p stopped-upgrade-518259 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1428222459 start -p stopped-upgrade-518259 --memory=2200 --vm-driver=docker  --container-runtime=crio: (34.999086127s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1428222459 -p stopped-upgrade-518259 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1428222459 -p stopped-upgrade-518259 stop: (2.849585343s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-518259 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-518259 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.995320754s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (76.84s)
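Note: the stopped-binary upgrade above is three steps: provision with the old release binary, stop with that same binary, then let the build under test adopt the stopped profile in place. A rough sketch of that sequence; the /tmp binary path and out/ path are the ones from this run and would differ on any other machine:

package main

import (
	"log"
	"os/exec"
)

// run shells out to the given binary and aborts on any failure.
func run(bin string, args ...string) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", bin, args, err, out)
	}
}

func main() {
	old := "/tmp/minikube-v1.26.0.1428222459" // old release binary from the log
	cur := "out/minikube-linux-arm64"         // freshly built binary under test
	p := "stopped-upgrade-518259"

	// Provision and stop the cluster with the old binary ...
	run(old, "start", "-p", p, "--memory=2200", "--vm-driver=docker", "--container-runtime=crio")
	run(old, "-p", p, "stop")

	// ... then the new binary must be able to start the stopped profile in place.
	run(cur, "start", "-p", p, "--memory=2200", "--driver=docker", "--container-runtime=crio")
}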

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-518259
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-518259: (1.434008907s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

                                                
                                    
x
+
TestPause/serial/Start (54.8s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-432914 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0429 14:56:10.052743 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-432914 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (54.797474085s)
--- PASS: TestPause/serial/Start (54.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-444971 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-444971 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (280.671585ms)

                                                
                                                
-- stdout --
	* [false-444971] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18771
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 14:58:21.255918 2074713 out.go:291] Setting OutFile to fd 1 ...
	I0429 14:58:21.256121 2074713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:58:21.256133 2074713 out.go:304] Setting ErrFile to fd 2...
	I0429 14:58:21.256139 2074713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 14:58:21.256391 2074713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18771-1897267/.minikube/bin
	I0429 14:58:21.256823 2074713 out.go:298] Setting JSON to false
	I0429 14:58:21.257820 2074713 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":38446,"bootTime":1714364256,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0429 14:58:21.257898 2074713 start.go:139] virtualization:  
	I0429 14:58:21.261596 2074713 out.go:177] * [false-444971] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0429 14:58:21.263176 2074713 out.go:177]   - MINIKUBE_LOCATION=18771
	I0429 14:58:21.263270 2074713 notify.go:220] Checking for updates...
	I0429 14:58:21.265125 2074713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 14:58:21.267128 2074713 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18771-1897267/kubeconfig
	I0429 14:58:21.269067 2074713 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18771-1897267/.minikube
	I0429 14:58:21.271449 2074713 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0429 14:58:21.273384 2074713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 14:58:21.275790 2074713 config.go:182] Loaded profile config "kubernetes-upgrade-960980": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 14:58:21.275887 2074713 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 14:58:21.316766 2074713 docker.go:122] docker version: linux-26.1.0:Docker Engine - Community
	I0429 14:58:21.316889 2074713 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0429 14:58:21.415799 2074713 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-29 14:58:21.406685752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0429 14:58:21.415908 2074713 docker.go:295] overlay module found
	I0429 14:58:21.418228 2074713 out.go:177] * Using the docker driver based on user configuration
	I0429 14:58:21.419928 2074713 start.go:297] selected driver: docker
	I0429 14:58:21.419943 2074713 start.go:901] validating driver "docker" against <nil>
	I0429 14:58:21.419957 2074713 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 14:58:21.422395 2074713 out.go:177] 
	W0429 14:58:21.423967 2074713 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0429 14:58:21.425594 2074713 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-444971 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-444971

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-444971

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-444971

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-444971

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-444971

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-444971

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-444971

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-444971

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-444971

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-444971

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-444971

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-444971" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-444971" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-444971

>>> host: docker daemon status:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: docker daemon config:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: /etc/docker/daemon.json:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: docker system info:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: cri-docker daemon status:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: cri-docker daemon config:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: cri-dockerd version:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: containerd daemon status:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: containerd daemon config:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: /etc/containerd/config.toml:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: containerd config dump:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: crio daemon status:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: crio daemon config:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: /etc/crio:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

>>> host: crio config:
* Profile "false-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-444971"

----------------------- debugLogs end: false-444971 [took: 4.720258793s] --------------------------------
helpers_test.go:175: Cleaning up "false-444971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-444971
--- PASS: TestNetworkPlugins/group/false (5.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (170.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-058397 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0429 15:00:28.163973 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
E0429 15:01:10.052521 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-058397 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m50.988364644s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (170.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-058397 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6b81454c-ec76-41d9-95b7-cfd67b00af9b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6b81454c-ec76-41d9-95b7-cfd67b00af9b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004478636s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-058397 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (65.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-143107 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-143107 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (1m5.066954659s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (65.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-058397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-058397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.359596616s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-058397 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.60s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (14.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-058397 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-058397 --alsologtostderr -v=3: (14.840414352s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.84s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-058397 -n old-k8s-version-058397
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-058397 -n old-k8s-version-058397: exit status 7 (274.113611ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-058397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.66s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (142.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-058397 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-058397 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m22.101370535s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-058397 -n old-k8s-version-058397
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (142.47s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-143107 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d22b52f6-8d52-4a6a-a676-0cfe7fcc6b5c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d22b52f6-8d52-4a6a-a676-0cfe7fcc6b5c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003644759s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-143107 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-143107 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-143107 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.423202739s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-143107 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.60s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-143107 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-143107 --alsologtostderr -v=3: (12.689005497s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.69s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-143107 -n no-preload-143107
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-143107 -n no-preload-143107: exit status 7 (84.984671ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-143107 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (279.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-143107 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-143107 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (4m39.006929316s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-143107 -n no-preload-143107
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (279.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7xlq8" [211cc306-cf94-4706-98e8-7d690d68834d] Running
E0429 15:05:28.163576 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00510953s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7xlq8" [211cc306-cf94-4706-98e8-7d690d68834d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004135494s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-058397 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-058397 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-058397 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-058397 -n old-k8s-version-058397
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-058397 -n old-k8s-version-058397: exit status 2 (325.394907ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-058397 -n old-k8s-version-058397
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-058397 -n old-k8s-version-058397: exit status 2 (314.452378ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-058397 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-058397 -n old-k8s-version-058397
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-058397 -n old-k8s-version-058397
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (74.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-828213 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
E0429 15:06:10.051785 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-828213 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (1m14.843271042s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (74.84s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-828213 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2d2f54ec-0cd5-4b4e-9074-a33d16daff83] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2d2f54ec-0cd5-4b4e-9074-a33d16daff83] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003892277s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-828213 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-828213 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-828213 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-828213 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-828213 --alsologtostderr -v=3: (11.976881164s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-828213 -n embed-certs-828213
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-828213 -n embed-certs-828213: exit status 7 (83.196938ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-828213 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (288.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-828213 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
E0429 15:07:37.638361 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
E0429 15:07:37.643667 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
E0429 15:07:37.653970 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
E0429 15:07:37.674276 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
E0429 15:07:37.714670 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
E0429 15:07:37.794823 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
E0429 15:07:37.955185 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
E0429 15:07:38.275331 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
E0429 15:07:38.915953 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
E0429 15:07:40.196189 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
E0429 15:07:42.757215 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
E0429 15:07:47.878385 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
E0429 15:07:58.118815 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
E0429 15:08:18.599687 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-828213 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (4m47.58893888s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-828213 -n embed-certs-828213
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (288.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-p2cjd" [3405279e-14a0-4913-9d67-ddf99ef0233b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004012069s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-p2cjd" [3405279e-14a0-4913-9d67-ddf99ef0233b] Running
E0429 15:08:59.560419 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005674669s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-143107 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-143107 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-143107 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-143107 -n no-preload-143107
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-143107 -n no-preload-143107: exit status 2 (314.833987ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-143107 -n no-preload-143107
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-143107 -n no-preload-143107: exit status 2 (318.124482ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-143107 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-143107 -n no-preload-143107
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-143107 -n no-preload-143107
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-936039 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-936039 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (52.836873485s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.84s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-936039 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a94cd20d-3214-46f3-827b-a2b0a42ced8b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a94cd20d-3214-46f3-827b-a2b0a42ced8b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004268304s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-936039 exec busybox -- /bin/sh -c "ulimit -n"
E0429 15:10:11.213048 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-936039 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-936039 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-936039 --alsologtostderr -v=3
E0429 15:10:21.480569 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-936039 --alsologtostderr -v=3: (11.973608507s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-936039 -n default-k8s-diff-port-936039
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-936039 -n default-k8s-diff-port-936039: exit status 7 (80.287727ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-936039 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-936039 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
E0429 15:10:28.162909 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
E0429 15:11:10.052437 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-936039 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (4m27.331613263s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-936039 -n default-k8s-diff-port-936039
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.69s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-vpkgg" [997d3f75-61dc-4793-9219-167177d14d1c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00357495s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-vpkgg" [997d3f75-61dc-4793-9219-167177d14d1c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003602948s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-828213 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-828213 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-828213 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-828213 -n embed-certs-828213
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-828213 -n embed-certs-828213: exit status 2 (327.50398ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-828213 -n embed-certs-828213
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-828213 -n embed-certs-828213: exit status 2 (329.205723ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-828213 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-828213 -n embed-certs-828213
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-828213 -n embed-certs-828213
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.97s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (47.82s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-149738 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
E0429 15:12:37.638455 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
E0429 15:13:05.321355 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-149738 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (47.816460421s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.82s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-149738 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-149738 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.091545711s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-149738 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-149738 --alsologtostderr -v=3: (1.298182477s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-149738 -n newest-cni-149738
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-149738 -n newest-cni-149738: exit status 7 (77.838749ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-149738 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (18.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-149738 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-149738 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.0: (17.928846863s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-149738 -n newest-cni-149738
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-149738 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-149738 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-149738 -n newest-cni-149738
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-149738 -n newest-cni-149738: exit status 2 (312.726247ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-149738 -n newest-cni-149738
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-149738 -n newest-cni-149738: exit status 2 (317.750201ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-149738 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-149738 -n newest-cni-149738
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-149738 -n newest-cni-149738
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.84s)
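For reference, the pause/unpause sequence exercised here can be replayed by hand. A minimal sketch, assuming the same locally built binary and profile name shown in the log:

	# pause the control plane; while paused, "minikube status" exits with status 2 (expected, see above)
	out/minikube-linux-arm64 pause -p newest-cni-149738 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-149738 -n newest-cni-149738
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-149738 -n newest-cni-149738
	# resume; both status checks should exit 0 again
	out/minikube-linux-arm64 unpause -p newest-cni-149738 --alsologtostderr -v=1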

                                                
                                    
TestNetworkPlugins/group/auto/Start (77.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-444971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0429 15:13:48.343453 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/no-preload-143107/client.crt: no such file or directory
E0429 15:13:48.349068 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/no-preload-143107/client.crt: no such file or directory
E0429 15:13:48.359287 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/no-preload-143107/client.crt: no such file or directory
E0429 15:13:48.379590 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/no-preload-143107/client.crt: no such file or directory
E0429 15:13:48.419812 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/no-preload-143107/client.crt: no such file or directory
E0429 15:13:48.499954 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/no-preload-143107/client.crt: no such file or directory
E0429 15:13:48.660155 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/no-preload-143107/client.crt: no such file or directory
E0429 15:13:48.980959 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/no-preload-143107/client.crt: no such file or directory
E0429 15:13:49.622060 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/no-preload-143107/client.crt: no such file or directory
E0429 15:13:50.903616 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/no-preload-143107/client.crt: no such file or directory
E0429 15:13:53.464550 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/no-preload-143107/client.crt: no such file or directory
E0429 15:13:58.584823 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/no-preload-143107/client.crt: no such file or directory
E0429 15:14:08.825530 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/no-preload-143107/client.crt: no such file or directory
E0429 15:14:29.306736 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/no-preload-143107/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-444971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m17.140293267s)
--- PASS: TestNetworkPlugins/group/auto/Start (77.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-b6qfl" [e06db134-7f36-4159-9b90-24cdb8a15396] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003937835s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-b6qfl" [e06db134-7f36-4159-9b90-24cdb8a15396] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0037265s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-936039 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-444971 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-444971 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cb9bq" [34d4d395-96ea-4a75-aaed-377689475bbe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-cb9bq" [34d4d395-96ea-4a75-aaed-377689475bbe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.005406959s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.28s)
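The netcat workload used by the following connectivity checks comes from testdata/netcat-deployment.yaml; a minimal sketch of deploying it and waiting for readiness, assuming the kubectl context created by this test:

	kubectl --context auto-444971 replace --force -f testdata/netcat-deployment.yaml
	# label selector taken from the wait step above
	kubectl --context auto-444971 wait --for=condition=ready pod -l app=netcat --timeout=15m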

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-936039 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-936039 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-936039 -n default-k8s-diff-port-936039
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-936039 -n default-k8s-diff-port-936039: exit status 2 (313.456256ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-936039 -n default-k8s-diff-port-936039
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-936039 -n default-k8s-diff-port-936039: exit status 2 (325.016904ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-936039 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-936039 -n default-k8s-diff-port-936039
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-936039 -n default-k8s-diff-port-936039
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.04s)
E0429 15:20:43.082128 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/default-k8s-diff-port-936039/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-444971 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-444971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-444971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
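The DNS, Localhost and HairPin checks above all execute inside the netcat pod; a minimal sketch of the three probes, assuming the same context and deployment:

	# in-cluster DNS resolution of the API service
	kubectl --context auto-444971 exec deployment/netcat -- nslookup kubernetes.default
	# loopback connectivity to the pod's own listener on 8080
	kubectl --context auto-444971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin: the pod reaching itself through its own "netcat" service name
	kubectl --context auto-444971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"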

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (84.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-444971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0429 15:15:10.267707 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/no-preload-143107/client.crt: no such file or directory
E0429 15:15:28.163205 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/addons-457090/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-444971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m24.891035276s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.89s)
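Each CNI profile in this group is created with the same start invocation, varying only the --cni value; a minimal sketch, assuming the flags recorded in this run:

	# kindnet, as in this test; calico, flannel and bridge follow the same pattern,
	# and custom-flannel passes a manifest path instead (--cni=testdata/kube-flannel.yaml)
	out/minikube-linux-arm64 start -p kindnet-444971 --memory=3072 --alsologtostderr \
	  --wait=true --wait-timeout=15m --cni=kindnet --driver=docker --container-runtime=crio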

                                                
                                    
TestNetworkPlugins/group/calico/Start (73.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-444971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0429 15:15:53.100848 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
E0429 15:16:10.052083 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
E0429 15:16:32.188788 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/no-preload-143107/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-444971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m13.761432969s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.76s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wpcd8" [1ce495fc-e425-4cc5-9387-b078e9ef7b6c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004344622s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
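The controller-pod check only waits for the CNI's own pod to be Running in its namespace; a minimal sketch using the label selector shown above (calico uses k8s-app=calico-node, flannel uses app=flannel in kube-flannel):

	kubectl --context kindnet-444971 get pods -n kube-system -l app=kindnet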

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-444971 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-444971 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xm9dk" [4fe9a010-9294-4a66-928a-2c5b570dd5ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-xm9dk" [4fe9a010-9294-4a66-928a-2c5b570dd5ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003631535s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-wjwxf" [418634c3-8746-4d33-9d48-0483f13d51f5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005651042s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-444971 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-444971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-444971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-444971 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-444971 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-w2zdk" [dbaafebc-44ca-4f45-8b85-d6a9d358c062] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-w2zdk" [dbaafebc-44ca-4f45-8b85-d6a9d358c062] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004162471s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-444971 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-444971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-444971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (77.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-444971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-444971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m17.423462403s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (77.42s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (90.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-444971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0429 15:17:37.638712 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/old-k8s-version-058397/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-444971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m30.322750873s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-444971 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-444971 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7k58d" [db78365d-7b82-417f-ae87-d057ceeab765] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7k58d" [db78365d-7b82-417f-ae87-d057ceeab765] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003544981s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-444971 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-444971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-444971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-444971 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-444971 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-prw2f" [379df377-f179-427c-97ee-fc525973f99c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-prw2f" [379df377-f179-427c-97ee-fc525973f99c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.006548549s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (58.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-444971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-444971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (58.645944533s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.65s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-444971 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-444971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-444971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (86.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-444971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0429 15:19:59.359137 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/auto-444971/client.crt: no such file or directory
E0429 15:19:59.364400 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/auto-444971/client.crt: no such file or directory
E0429 15:19:59.374792 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/auto-444971/client.crt: no such file or directory
E0429 15:19:59.395032 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/auto-444971/client.crt: no such file or directory
E0429 15:19:59.435257 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/auto-444971/client.crt: no such file or directory
E0429 15:19:59.515506 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/auto-444971/client.crt: no such file or directory
E0429 15:19:59.675852 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/auto-444971/client.crt: no such file or directory
E0429 15:19:59.996393 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/auto-444971/client.crt: no such file or directory
E0429 15:20:00.636813 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/auto-444971/client.crt: no such file or directory
E0429 15:20:01.917937 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/auto-444971/client.crt: no such file or directory
E0429 15:20:02.120344 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/default-k8s-diff-port-936039/client.crt: no such file or directory
E0429 15:20:02.125570 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/default-k8s-diff-port-936039/client.crt: no such file or directory
E0429 15:20:02.135787 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/default-k8s-diff-port-936039/client.crt: no such file or directory
E0429 15:20:02.156005 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/default-k8s-diff-port-936039/client.crt: no such file or directory
E0429 15:20:02.196253 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/default-k8s-diff-port-936039/client.crt: no such file or directory
E0429 15:20:02.276458 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/default-k8s-diff-port-936039/client.crt: no such file or directory
E0429 15:20:02.436957 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/default-k8s-diff-port-936039/client.crt: no such file or directory
E0429 15:20:02.757268 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/default-k8s-diff-port-936039/client.crt: no such file or directory
E0429 15:20:03.398183 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/default-k8s-diff-port-936039/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-444971 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m26.402745195s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nqn6f" [f92cec4a-d2d5-4653-a826-8f1ea1ae288e] Running
E0429 15:20:04.478551 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/auto-444971/client.crt: no such file or directory
E0429 15:20:04.678750 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/default-k8s-diff-port-936039/client.crt: no such file or directory
E0429 15:20:07.239945 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/default-k8s-diff-port-936039/client.crt: no such file or directory
E0429 15:20:09.599132 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/auto-444971/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00419182s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-444971 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-444971 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-w549v" [5076e2bf-bfd7-463f-a29d-71d2e779e377] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0429 15:20:12.360602 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/default-k8s-diff-port-936039/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-w549v" [5076e2bf-bfd7-463f-a29d-71d2e779e377] Running
E0429 15:20:19.840311 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/auto-444971/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003446619s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-444971 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-444971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-444971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-444971 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-444971 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cqk2w" [f6fb3675-f7eb-47da-a915-7e42eae49d6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-cqk2w" [f6fb3675-f7eb-47da-a915-7e42eae49d6f] Running
E0429 15:21:10.052299 1902684 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/functional-304104/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003159141s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-444971 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-444971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-444971 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (29/321)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.54s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-259064 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-259064" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-259064
--- SKIP: TestDownloadOnlyKic (0.54s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-646710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-646710
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-444971 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-444971

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-444971

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-444971

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-444971

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-444971

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-444971

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-444971

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-444971

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-444971

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-444971

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-444971

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-444971" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-444971" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18771-1897267/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Apr 2024 14:57:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-960980
contexts:
- context:
    cluster: kubernetes-upgrade-960980
    extensions:
    - extension:
        last-update: Mon, 29 Apr 2024 14:57:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-960980
  name: kubernetes-upgrade-960980
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-960980
  user:
    client-certificate: /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/kubernetes-upgrade-960980/client.crt
    client-key: /home/jenkins/minikube-integration/18771-1897267/.minikube/profiles/kubernetes-upgrade-960980/client.key
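
The kubeconfig captured above accounts for every "context was not found" error in this debug dump: the only context it defines belongs to kubernetes-upgrade-960980 and current-context is empty, so no kubenet-444971 entry exists for kubectl to select. A quick way to confirm this outside the harness (standard kubectl commands, shown only as an illustration):

    # list every context the kubeconfig knows about
    kubectl config get-contexts -o name
    # show the entry that would be used right now; this fails while current-context is unset
    kubectl config view --minify
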

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-444971

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-444971"

                                                
                                                
----------------------- debugLogs end: kubenet-444971 [took: 4.583891668s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-444971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-444971
--- SKIP: TestNetworkPlugins/group/kubenet (4.81s)

TestNetworkPlugins/group/cilium (6.24s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-444971 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-444971

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-444971

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-444971

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-444971

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-444971

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-444971

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-444971

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-444971

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-444971

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-444971

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-444971

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-444971" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-444971

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-444971

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-444971

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-444971

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-444971" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-444971" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
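
The cilium profile's kubeconfig is emptier still: clusters, contexts and users are all null, consistent with a profile that was never started. Trying to select the missing context with plain kubectl (illustration only) fails the same way the harness commands above do:

    # no such context exists in the kubeconfig, so this exits with an error
    kubectl config use-context cilium-444971
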

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-444971

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-444971" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-444971"

                                                
                                                
----------------------- debugLogs end: cilium-444971 [took: 6.036928531s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-444971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-444971
--- SKIP: TestNetworkPlugins/group/cilium (6.24s)