Test Report: Docker_Linux_crio_arm64 19910

0805a48cef53763875eefc0e18e5d59dcaccd8a0:2024-11-05:36955

Test fail (3/330)

| Order | Failed Test                                       | Duration (s) |
|-------|---------------------------------------------------|--------------|
| 36    | TestAddons/parallel/Ingress                       | 150.92       |
| 38    | TestAddons/parallel/MetricsServer                 | 319.53       |
| 173   | TestMultiControlPlane/serial/DeleteSecondaryNode  | 16.67        |
TestAddons/parallel/Ingress (150.92s)
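The failure below is a curl timeout inside the minikube node: the test waits for the nginx pod behind the ingress to become Ready, then curls the ingress controller from inside the node with the Host header nginx.example.com, and that curl exits with status 28 (curl's operation-timed-out code) after roughly 2m10s. A minimal way to replay the same check by hand, assuming the addons-638421 profile from this run is still up and the minikube binary sits at the same relative path, is:

    # confirm the ingress-nginx controller pod is Ready (same selector the test uses)
    kubectl --context addons-638421 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s

    # replay the request that timed out; exit status 28 would indicate the same timeout
    out/minikube-linux-arm64 -p addons-638421 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"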

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-638421 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-638421 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-638421 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a31558bc-06be-4d59-9f70-2840351c65dc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a31558bc-06be-4d59-9f70-2840351c65dc] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003017875s
I1105 17:51:57.676893  285188 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-638421 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.145848894s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-638421 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-638421
helpers_test.go:235: (dbg) docker inspect addons-638421:

-- stdout --
	[
	    {
	        "Id": "bac0cd0c5efa5a79d742a82bac2bd1e6028ef79211194d383e14238cfebc209a",
	        "Created": "2024-11-05T17:47:05.571332234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286449,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-11-05T17:47:05.685071812Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b9c385cbd7184c9dd77d4bc379a996635e559e337cc53655e2d39219017c804c",
	        "ResolvConfPath": "/var/lib/docker/containers/bac0cd0c5efa5a79d742a82bac2bd1e6028ef79211194d383e14238cfebc209a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bac0cd0c5efa5a79d742a82bac2bd1e6028ef79211194d383e14238cfebc209a/hostname",
	        "HostsPath": "/var/lib/docker/containers/bac0cd0c5efa5a79d742a82bac2bd1e6028ef79211194d383e14238cfebc209a/hosts",
	        "LogPath": "/var/lib/docker/containers/bac0cd0c5efa5a79d742a82bac2bd1e6028ef79211194d383e14238cfebc209a/bac0cd0c5efa5a79d742a82bac2bd1e6028ef79211194d383e14238cfebc209a-json.log",
	        "Name": "/addons-638421",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-638421:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-638421",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8a9d29c12ef0d73dc50a3806d04930eac49ee2882249fa752cfae930d1715387-init/diff:/var/lib/docker/overlay2/f1c041cd086a3a2db4f768b1c920339fb85fb20492664e0532c0f72dc744887a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8a9d29c12ef0d73dc50a3806d04930eac49ee2882249fa752cfae930d1715387/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8a9d29c12ef0d73dc50a3806d04930eac49ee2882249fa752cfae930d1715387/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8a9d29c12ef0d73dc50a3806d04930eac49ee2882249fa752cfae930d1715387/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-638421",
	                "Source": "/var/lib/docker/volumes/addons-638421/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-638421",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-638421",
	                "name.minikube.sigs.k8s.io": "addons-638421",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "157bed46083984150cbf1f529a89c97d1d867f744909202dd525796c530d526f",
	            "SandboxKey": "/var/run/docker/netns/157bed460839",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-638421": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ba6034dd16840d908bc849e487ad0dfe7211406fbccbcd6ae357274076dd616b",
	                    "EndpointID": "001ab74b758066e7c297271b89b32f78f9a9a09c0ca31c083ce12b068e0d626f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-638421",
	                        "bac0cd0c5efa"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-638421 -n addons-638421
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-638421 logs -n 25: (1.519471498s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	| delete  | -p download-only-931410              | download-only-931410   | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	| start   | -o=json --download-only              | download-only-080457   | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC |                     |
	|         | -p download-only-080457              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	| delete  | -p download-only-080457              | download-only-080457   | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	| delete  | -p download-only-931410              | download-only-931410   | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	| delete  | -p download-only-080457              | download-only-080457   | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	| start   | --download-only -p                   | download-docker-346323 | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC |                     |
	|         | download-docker-346323               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-346323            | download-docker-346323 | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	| start   | --download-only -p                   | binary-mirror-032774   | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC |                     |
	|         | binary-mirror-032774                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34655               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-032774              | binary-mirror-032774   | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	| addons  | disable dashboard -p                 | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC |                     |
	|         | addons-638421                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC |                     |
	|         | addons-638421                        |                        |         |         |                     |                     |
	| start   | -p addons-638421 --wait=true         | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:50 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	| addons  | addons-638421 addons disable         | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:50 UTC | 05 Nov 24 17:50 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-638421 addons disable         | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:50 UTC | 05 Nov 24 17:50 UTC |
	|         | gcp-auth --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:50 UTC | 05 Nov 24 17:50 UTC |
	|         | -p addons-638421                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-638421 addons disable         | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:50 UTC | 05 Nov 24 17:50 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-638421 ip                     | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:50 UTC | 05 Nov 24 17:50 UTC |
	| addons  | addons-638421 addons disable         | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:50 UTC | 05 Nov 24 17:50 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-638421 addons                 | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:51 UTC | 05 Nov 24 17:51 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-638421 addons                 | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:51 UTC | 05 Nov 24 17:51 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-638421 addons                 | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:51 UTC | 05 Nov 24 17:51 UTC |
	|         | disable inspektor-gadget             |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ssh     | addons-638421 ssh curl -s            | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:51 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-638421 ip                     | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:54 UTC | 05 Nov 24 17:54 UTC |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 17:46:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 17:46:41.761718  285958 out.go:345] Setting OutFile to fd 1 ...
	I1105 17:46:41.761934  285958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:46:41.761962  285958 out.go:358] Setting ErrFile to fd 2...
	I1105 17:46:41.761981  285958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:46:41.762344  285958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-279806/.minikube/bin
	I1105 17:46:41.763442  285958 out.go:352] Setting JSON to false
	I1105 17:46:41.764316  285958 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5345,"bootTime":1730823457,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1105 17:46:41.764416  285958 start.go:139] virtualization:  
	I1105 17:46:41.766507  285958 out.go:177] * [addons-638421] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1105 17:46:41.767693  285958 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 17:46:41.767755  285958 notify.go:220] Checking for updates...
	I1105 17:46:41.770222  285958 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 17:46:41.771600  285958 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-279806/kubeconfig
	I1105 17:46:41.773029  285958 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-279806/.minikube
	I1105 17:46:41.775080  285958 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1105 17:46:41.776127  285958 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 17:46:41.777499  285958 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 17:46:41.796526  285958 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1105 17:46:41.796681  285958 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 17:46:41.854919  285958 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-11-05 17:46:41.845190001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1105 17:46:41.855033  285958 docker.go:318] overlay module found
	I1105 17:46:41.857011  285958 out.go:177] * Using the docker driver based on user configuration
	I1105 17:46:41.858144  285958 start.go:297] selected driver: docker
	I1105 17:46:41.858158  285958 start.go:901] validating driver "docker" against <nil>
	I1105 17:46:41.858171  285958 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 17:46:41.858897  285958 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 17:46:41.913044  285958 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-11-05 17:46:41.903513589 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1105 17:46:41.913246  285958 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 17:46:41.913478  285958 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 17:46:41.914836  285958 out.go:177] * Using Docker driver with root privileges
	I1105 17:46:41.916043  285958 cni.go:84] Creating CNI manager for ""
	I1105 17:46:41.916103  285958 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1105 17:46:41.916115  285958 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1105 17:46:41.916192  285958 start.go:340] cluster config:
	{Name:addons-638421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-638421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 17:46:41.917525  285958 out.go:177] * Starting "addons-638421" primary control-plane node in "addons-638421" cluster
	I1105 17:46:41.918887  285958 cache.go:121] Beginning downloading kic base image for docker with crio
	I1105 17:46:41.920056  285958 out.go:177] * Pulling base image v0.0.45-1730282848-19883 ...
	I1105 17:46:41.921280  285958 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 17:46:41.921327  285958 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	I1105 17:46:41.921339  285958 cache.go:56] Caching tarball of preloaded images
	I1105 17:46:41.921369  285958 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local docker daemon
	I1105 17:46:41.921425  285958 preload.go:172] Found /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1105 17:46:41.921435  285958 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 17:46:41.921766  285958 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/config.json ...
	I1105 17:46:41.921793  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/config.json: {Name:mkc3898952e36435b36cca750d84ae737452ee78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:46:41.936333  285958 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 to local cache
	I1105 17:46:41.936464  285958 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local cache directory
	I1105 17:46:41.936483  285958 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local cache directory, skipping pull
	I1105 17:46:41.936487  285958 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 exists in cache, skipping pull
	I1105 17:46:41.936494  285958 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 as a tarball
	I1105 17:46:41.936500  285958 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 from local cache
	I1105 17:46:58.792525  285958 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 from cached tarball
	I1105 17:46:58.792571  285958 cache.go:194] Successfully downloaded all kic artifacts
	I1105 17:46:58.792635  285958 start.go:360] acquireMachinesLock for addons-638421: {Name:mk11f83312d48db3dadab7544a97d20493370375 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 17:46:58.792750  285958 start.go:364] duration metric: took 92.89µs to acquireMachinesLock for "addons-638421"
	I1105 17:46:58.792781  285958 start.go:93] Provisioning new machine with config: &{Name:addons-638421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-638421 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 17:46:58.792865  285958 start.go:125] createHost starting for "" (driver="docker")
	I1105 17:46:58.794387  285958 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1105 17:46:58.794637  285958 start.go:159] libmachine.API.Create for "addons-638421" (driver="docker")
	I1105 17:46:58.794672  285958 client.go:168] LocalClient.Create starting
	I1105 17:46:58.794792  285958 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem
	I1105 17:46:59.021243  285958 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem
	I1105 17:46:59.369774  285958 cli_runner.go:164] Run: docker network inspect addons-638421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1105 17:46:59.383351  285958 cli_runner.go:211] docker network inspect addons-638421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1105 17:46:59.383447  285958 network_create.go:284] running [docker network inspect addons-638421] to gather additional debugging logs...
	I1105 17:46:59.383468  285958 cli_runner.go:164] Run: docker network inspect addons-638421
	W1105 17:46:59.397087  285958 cli_runner.go:211] docker network inspect addons-638421 returned with exit code 1
	I1105 17:46:59.397115  285958 network_create.go:287] error running [docker network inspect addons-638421]: docker network inspect addons-638421: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-638421 not found
	I1105 17:46:59.397139  285958 network_create.go:289] output of [docker network inspect addons-638421]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-638421 not found
	
	** /stderr **
	I1105 17:46:59.397243  285958 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1105 17:46:59.411963  285958 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b191f0}
	I1105 17:46:59.412010  285958 network_create.go:124] attempt to create docker network addons-638421 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1105 17:46:59.412069  285958 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-638421 addons-638421
	I1105 17:46:59.477179  285958 network_create.go:108] docker network addons-638421 192.168.49.0/24 created
	I1105 17:46:59.477211  285958 kic.go:121] calculated static IP "192.168.49.2" for the "addons-638421" container
	I1105 17:46:59.477286  285958 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1105 17:46:59.490302  285958 cli_runner.go:164] Run: docker volume create addons-638421 --label name.minikube.sigs.k8s.io=addons-638421 --label created_by.minikube.sigs.k8s.io=true
	I1105 17:46:59.507116  285958 oci.go:103] Successfully created a docker volume addons-638421
	I1105 17:46:59.507199  285958 cli_runner.go:164] Run: docker run --rm --name addons-638421-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-638421 --entrypoint /usr/bin/test -v addons-638421:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 -d /var/lib
	I1105 17:47:01.518950  285958 cli_runner.go:217] Completed: docker run --rm --name addons-638421-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-638421 --entrypoint /usr/bin/test -v addons-638421:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 -d /var/lib: (2.011703603s)
	I1105 17:47:01.518980  285958 oci.go:107] Successfully prepared a docker volume addons-638421
	I1105 17:47:01.519012  285958 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 17:47:01.519032  285958 kic.go:194] Starting extracting preloaded images to volume ...
	I1105 17:47:01.519107  285958 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-638421:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 -I lz4 -xf /preloaded.tar -C /extractDir
	I1105 17:47:05.512133  285958 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-638421:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.992986229s)
	I1105 17:47:05.512166  285958 kic.go:203] duration metric: took 3.993130024s to extract preloaded images to volume ...
	W1105 17:47:05.512328  285958 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1105 17:47:05.512449  285958 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1105 17:47:05.556893  285958 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-638421 --name addons-638421 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-638421 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-638421 --network addons-638421 --ip 192.168.49.2 --volume addons-638421:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4
	I1105 17:47:05.868579  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Running}}
	I1105 17:47:05.891442  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:05.919827  285958 cli_runner.go:164] Run: docker exec addons-638421 stat /var/lib/dpkg/alternatives/iptables
	I1105 17:47:05.990441  285958 oci.go:144] the created container "addons-638421" has a running status.
	I1105 17:47:05.990527  285958 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa...
	I1105 17:47:06.224308  285958 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1105 17:47:06.252911  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:06.286105  285958 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1105 17:47:06.286126  285958 kic_runner.go:114] Args: [docker exec --privileged addons-638421 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1105 17:47:06.365080  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:06.386587  285958 machine.go:93] provisionDockerMachine start ...
	I1105 17:47:06.386689  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:06.412968  285958 main.go:141] libmachine: Using SSH client type: native
	I1105 17:47:06.413240  285958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1105 17:47:06.413249  285958 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 17:47:06.413921  285958 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1105 17:47:09.531942  285958 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-638421
	
	I1105 17:47:09.531966  285958 ubuntu.go:169] provisioning hostname "addons-638421"
	I1105 17:47:09.532033  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:09.552762  285958 main.go:141] libmachine: Using SSH client type: native
	I1105 17:47:09.553009  285958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1105 17:47:09.553027  285958 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-638421 && echo "addons-638421" | sudo tee /etc/hostname
	I1105 17:47:09.683636  285958 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-638421
	
	I1105 17:47:09.683718  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:09.699942  285958 main.go:141] libmachine: Using SSH client type: native
	I1105 17:47:09.700190  285958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1105 17:47:09.700213  285958 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-638421' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-638421/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-638421' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 17:47:09.820406  285958 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 17:47:09.820438  285958 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19910-279806/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-279806/.minikube}
	I1105 17:47:09.820468  285958 ubuntu.go:177] setting up certificates
	I1105 17:47:09.820479  285958 provision.go:84] configureAuth start
	I1105 17:47:09.820544  285958 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-638421
	I1105 17:47:09.837546  285958 provision.go:143] copyHostCerts
	I1105 17:47:09.837633  285958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem (1123 bytes)
	I1105 17:47:09.837777  285958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem (1679 bytes)
	I1105 17:47:09.837846  285958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem (1078 bytes)
	I1105 17:47:09.837906  285958 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem org=jenkins.addons-638421 san=[127.0.0.1 192.168.49.2 addons-638421 localhost minikube]
	I1105 17:47:10.586317  285958 provision.go:177] copyRemoteCerts
	I1105 17:47:10.586420  285958 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 17:47:10.586479  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:10.604454  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:10.697807  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 17:47:10.720745  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1105 17:47:10.744323  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 17:47:10.767387  285958 provision.go:87] duration metric: took 946.881723ms to configureAuth
	I1105 17:47:10.767457  285958 ubuntu.go:193] setting minikube options for container-runtime
	I1105 17:47:10.767664  285958 config.go:182] Loaded profile config "addons-638421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 17:47:10.767786  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:10.784208  285958 main.go:141] libmachine: Using SSH client type: native
	I1105 17:47:10.784470  285958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1105 17:47:10.784491  285958 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 17:47:11.000719  285958 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 17:47:11.000743  285958 machine.go:96] duration metric: took 4.61413714s to provisionDockerMachine
	I1105 17:47:11.000755  285958 client.go:171] duration metric: took 12.206077013s to LocalClient.Create
	I1105 17:47:11.000774  285958 start.go:167] duration metric: took 12.206137822s to libmachine.API.Create "addons-638421"
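	The provisioner has just written the CRIO_MINIKUBE_OPTIONS drop-in and restarted CRI-O inside the node container. A quick way to spot-check that step by hand, assuming the profile name shown in this log, is:

		minikube -p addons-638421 ssh -- cat /etc/sysconfig/crio.minikube
		minikube -p addons-638421 ssh -- systemctl is-active crio

	The first command should echo the --insecure-registry 10.96.0.0/12 option seen above; the second should print "active".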
	I1105 17:47:11.000785  285958 start.go:293] postStartSetup for "addons-638421" (driver="docker")
	I1105 17:47:11.000800  285958 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 17:47:11.000878  285958 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 17:47:11.000931  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:11.018295  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:11.110540  285958 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 17:47:11.114113  285958 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1105 17:47:11.114157  285958 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1105 17:47:11.114168  285958 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1105 17:47:11.114180  285958 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1105 17:47:11.114195  285958 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-279806/.minikube/addons for local assets ...
	I1105 17:47:11.114267  285958 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-279806/.minikube/files for local assets ...
	I1105 17:47:11.114298  285958 start.go:296] duration metric: took 113.503361ms for postStartSetup
	I1105 17:47:11.114623  285958 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-638421
	I1105 17:47:11.131435  285958 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/config.json ...
	I1105 17:47:11.131722  285958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 17:47:11.131766  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:11.148205  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:11.233960  285958 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1105 17:47:11.238376  285958 start.go:128] duration metric: took 12.445496117s to createHost
	I1105 17:47:11.238402  285958 start.go:83] releasing machines lock for "addons-638421", held for 12.445636753s
	I1105 17:47:11.238477  285958 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-638421
	I1105 17:47:11.255490  285958 ssh_runner.go:195] Run: cat /version.json
	I1105 17:47:11.255549  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:11.255793  285958 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 17:47:11.255869  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:11.273443  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:11.287089  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:11.359919  285958 ssh_runner.go:195] Run: systemctl --version
	I1105 17:47:11.491873  285958 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 17:47:11.635716  285958 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 17:47:11.640093  285958 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 17:47:11.660024  285958 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1105 17:47:11.660108  285958 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 17:47:11.694552  285958 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1105 17:47:11.694584  285958 start.go:495] detecting cgroup driver to use...
	I1105 17:47:11.694618  285958 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1105 17:47:11.694690  285958 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 17:47:11.713139  285958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 17:47:11.725948  285958 docker.go:217] disabling cri-docker service (if available) ...
	I1105 17:47:11.726014  285958 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 17:47:11.741011  285958 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 17:47:11.756862  285958 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 17:47:11.845196  285958 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 17:47:11.940836  285958 docker.go:233] disabling docker service ...
	I1105 17:47:11.940904  285958 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 17:47:11.960736  285958 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 17:47:11.972166  285958 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 17:47:12.063966  285958 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 17:47:12.158434  285958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 17:47:12.170405  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 17:47:12.186411  285958 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 17:47:12.186489  285958 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:47:12.195952  285958 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 17:47:12.196035  285958 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:47:12.205847  285958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:47:12.215664  285958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:47:12.225088  285958 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 17:47:12.234213  285958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:47:12.243658  285958 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:47:12.259504  285958 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:47:12.269709  285958 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 17:47:12.278550  285958 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 17:47:12.287055  285958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 17:47:12.372681  285958 ssh_runner.go:195] Run: sudo systemctl restart crio
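	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before this restart. A rough sketch of the resulting fragment, with the section headers assumed from a stock CRI-O layout (verify with sudo cat on the node), would be:

		[crio.image]
		pause_image = "registry.k8s.io/pause:3.10"

		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]

	The values themselves come straight from the commands logged above; only the section names are assumed.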
	I1105 17:47:12.486734  285958 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 17:47:12.486895  285958 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 17:47:12.490499  285958 start.go:563] Will wait 60s for crictl version
	I1105 17:47:12.490570  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:47:12.494692  285958 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 17:47:12.533081  285958 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1105 17:47:12.533181  285958 ssh_runner.go:195] Run: crio --version
	I1105 17:47:12.571577  285958 ssh_runner.go:195] Run: crio --version
	I1105 17:47:12.609043  285958 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1105 17:47:12.610320  285958 cli_runner.go:164] Run: docker network inspect addons-638421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1105 17:47:12.625644  285958 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1105 17:47:12.629315  285958 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
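	This rewrites /etc/hosts inside the node so that host.minikube.internal resolves to the bridge gateway 192.168.49.1. A minimal check, again assuming the profile name from this log:

		minikube -p addons-638421 ssh -- getent hosts host.minikube.internal

	which should print 192.168.49.1 followed by the name.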
	I1105 17:47:12.640038  285958 kubeadm.go:883] updating cluster {Name:addons-638421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-638421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 17:47:12.640170  285958 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 17:47:12.640229  285958 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 17:47:12.718531  285958 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 17:47:12.718557  285958 crio.go:433] Images already preloaded, skipping extraction
	I1105 17:47:12.718611  285958 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 17:47:12.757651  285958 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 17:47:12.757673  285958 cache_images.go:84] Images are preloaded, skipping loading
	I1105 17:47:12.757681  285958 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1105 17:47:12.757772  285958 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-638421 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-638421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 17:47:12.757859  285958 ssh_runner.go:195] Run: crio config
	I1105 17:47:12.813091  285958 cni.go:84] Creating CNI manager for ""
	I1105 17:47:12.813112  285958 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1105 17:47:12.813122  285958 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 17:47:12.813145  285958 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-638421 NodeName:addons-638421 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 17:47:12.813278  285958 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-638421"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 17:47:12.813350  285958 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 17:47:12.821923  285958 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 17:47:12.821998  285958 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 17:47:12.830421  285958 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1105 17:47:12.848363  285958 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 17:47:12.866525  285958 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1105 17:47:12.883613  285958 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1105 17:47:12.887037  285958 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 17:47:12.897614  285958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 17:47:12.984474  285958 ssh_runner.go:195] Run: sudo systemctl start kubelet
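	At this point the kubelet drop-in, the kubelet unit file, and the kubeadm config have been copied onto the node and the kubelet has been started ahead of kubeadm init. To inspect what actually landed there, roughly (profile name taken from this log):

		minikube -p addons-638421 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
		minikube -p addons-638421 ssh -- systemctl cat kubelet
		minikube -p addons-638421 ssh -- systemctl is-active kubelet

	systemctl cat shows the unit together with the 10-kubeadm.conf drop-in written above.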
	I1105 17:47:12.997540  285958 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421 for IP: 192.168.49.2
	I1105 17:47:12.997562  285958 certs.go:194] generating shared ca certs ...
	I1105 17:47:12.997579  285958 certs.go:226] acquiring lock for ca certs: {Name:mk7e394808202081d7250bf8ad59a3f119279ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:12.997700  285958 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key
	I1105 17:47:13.727210  285958 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt ...
	I1105 17:47:13.727284  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt: {Name:mkf1106f42f4bd8b4e9cc0c09cf43e224d6e4d77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:13.727499  285958 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key ...
	I1105 17:47:13.727538  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key: {Name:mk70791accfe1ce1ee535bb8717477a0b263e077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:13.728161  285958 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key
	I1105 17:47:14.043503  285958 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.crt ...
	I1105 17:47:14.043543  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.crt: {Name:mkb9e298515dcba1584664fd6752a7c87593fd93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:14.043752  285958 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key ...
	I1105 17:47:14.043766  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key: {Name:mkfabb63e26b0da996b5cde4c5ac31decabeaf9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:14.043848  285958 certs.go:256] generating profile certs ...
	I1105 17:47:14.043948  285958 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.key
	I1105 17:47:14.043967  285958 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt with IP's: []
	I1105 17:47:14.233310  285958 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt ...
	I1105 17:47:14.233350  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: {Name:mk1d5b6c538ba9338a12a3484f12513b45bd70ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:14.233539  285958 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.key ...
	I1105 17:47:14.233553  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.key: {Name:mk17f6f5a6828ae04d86564391c29b09b2849add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:14.234123  285958 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.key.6136f6be
	I1105 17:47:14.234154  285958 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.crt.6136f6be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1105 17:47:15.042502  285958 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.crt.6136f6be ...
	I1105 17:47:15.042539  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.crt.6136f6be: {Name:mkab983ff08f02b24e234d0f10aaba5016e18b1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:15.042742  285958 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.key.6136f6be ...
	I1105 17:47:15.042759  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.key.6136f6be: {Name:mk63d3a13d47642ed23e104d9b25369657e35819 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:15.042853  285958 certs.go:381] copying /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.crt.6136f6be -> /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.crt
	I1105 17:47:15.042943  285958 certs.go:385] copying /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.key.6136f6be -> /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.key
	I1105 17:47:15.043026  285958 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/proxy-client.key
	I1105 17:47:15.043051  285958 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/proxy-client.crt with IP's: []
	I1105 17:47:15.788296  285958 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/proxy-client.crt ...
	I1105 17:47:15.788330  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/proxy-client.crt: {Name:mkda310dc6b34ccb2fe27b446ae3b24645ee5362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:15.788519  285958 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/proxy-client.key ...
	I1105 17:47:15.788533  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/proxy-client.key: {Name:mkf7e666d49ad0feba5515de915e6a1270ef2c39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:15.788759  285958 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 17:47:15.788802  285958 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem (1078 bytes)
	I1105 17:47:15.788833  285958 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem (1123 bytes)
	I1105 17:47:15.788865  285958 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem (1679 bytes)
	I1105 17:47:15.789466  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 17:47:15.815522  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 17:47:15.840674  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 17:47:15.866178  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 17:47:15.890225  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1105 17:47:15.913946  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 17:47:15.937020  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 17:47:15.965051  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 17:47:16.000755  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 17:47:16.033733  285958 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 17:47:16.052384  285958 ssh_runner.go:195] Run: openssl version
	I1105 17:47:16.058042  285958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 17:47:16.067905  285958 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 17:47:16.071591  285958 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:47 /usr/share/ca-certificates/minikubeCA.pem
	I1105 17:47:16.071689  285958 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 17:47:16.078824  285958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
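	The apiserver profile cert generated above was requested with SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2, and the minikube CA was linked into /etc/ssl/certs under its subject hash. Both can be spot-checked from the host; note the pipe runs locally on the ssh output:

		minikube -p addons-638421 ssh -- sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
		  | grep -A1 'Subject Alternative Name'
		minikube -p addons-638421 ssh -- ls -l /etc/ssl/certs/b5213941.0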
	I1105 17:47:16.088525  285958 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 17:47:16.092018  285958 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 17:47:16.092097  285958 kubeadm.go:392] StartCluster: {Name:addons-638421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-638421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 17:47:16.092201  285958 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 17:47:16.092267  285958 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 17:47:16.130283  285958 cri.go:89] found id: ""
	I1105 17:47:16.130399  285958 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 17:47:16.139239  285958 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 17:47:16.148284  285958 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1105 17:47:16.148378  285958 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 17:47:16.157060  285958 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 17:47:16.157082  285958 kubeadm.go:157] found existing configuration files:
	
	I1105 17:47:16.157157  285958 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 17:47:16.166021  285958 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 17:47:16.166089  285958 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 17:47:16.174553  285958 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 17:47:16.183626  285958 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 17:47:16.183718  285958 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 17:47:16.192093  285958 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 17:47:16.201231  285958 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 17:47:16.201318  285958 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 17:47:16.209513  285958 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 17:47:16.218356  285958 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 17:47:16.218420  285958 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 17:47:16.226808  285958 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1105 17:47:16.267014  285958 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 17:47:16.267185  285958 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 17:47:16.287335  285958 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1105 17:47:16.287410  285958 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-aws
	I1105 17:47:16.287450  285958 kubeadm.go:310] OS: Linux
	I1105 17:47:16.287501  285958 kubeadm.go:310] CGROUPS_CPU: enabled
	I1105 17:47:16.287554  285958 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1105 17:47:16.287603  285958 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1105 17:47:16.287655  285958 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1105 17:47:16.287706  285958 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1105 17:47:16.287761  285958 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1105 17:47:16.287809  285958 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1105 17:47:16.287861  285958 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1105 17:47:16.287910  285958 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1105 17:47:16.344112  285958 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 17:47:16.344300  285958 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 17:47:16.344452  285958 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 17:47:16.352910  285958 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 17:47:16.355627  285958 out.go:235]   - Generating certificates and keys ...
	I1105 17:47:16.355805  285958 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 17:47:16.355910  285958 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 17:47:16.899212  285958 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1105 17:47:17.316557  285958 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1105 17:47:18.092012  285958 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1105 17:47:18.343114  285958 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1105 17:47:18.892066  285958 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1105 17:47:18.892399  285958 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-638421 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1105 17:47:19.304773  285958 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1105 17:47:19.305110  285958 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-638421 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1105 17:47:19.681655  285958 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1105 17:47:20.165817  285958 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1105 17:47:20.341606  285958 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1105 17:47:20.341939  285958 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 17:47:21.057527  285958 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 17:47:21.648347  285958 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 17:47:22.357901  285958 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 17:47:22.621691  285958 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 17:47:22.926682  285958 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 17:47:22.927502  285958 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 17:47:22.932509  285958 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 17:47:22.934214  285958 out.go:235]   - Booting up control plane ...
	I1105 17:47:22.934311  285958 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 17:47:22.934388  285958 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 17:47:22.935474  285958 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 17:47:22.944765  285958 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 17:47:22.951090  285958 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 17:47:22.951145  285958 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 17:47:23.044002  285958 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 17:47:23.044138  285958 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 17:47:24.045502  285958 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001636577s
	I1105 17:47:24.045593  285958 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 17:47:30.047912  285958 kubeadm.go:310] [api-check] The API server is healthy after 6.002383879s
	I1105 17:47:30.069330  285958 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 17:47:30.085527  285958 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 17:47:30.115355  285958 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 17:47:30.115560  285958 kubeadm.go:310] [mark-control-plane] Marking the node addons-638421 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 17:47:30.127547  285958 kubeadm.go:310] [bootstrap-token] Using token: rsv0a1.q27lp5o52vrw8wgr
	I1105 17:47:30.130380  285958 out.go:235]   - Configuring RBAC rules ...
	I1105 17:47:30.130535  285958 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 17:47:30.134622  285958 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 17:47:30.143923  285958 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 17:47:30.150003  285958 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 17:47:30.154238  285958 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 17:47:30.158158  285958 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 17:47:30.454634  285958 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 17:47:30.931775  285958 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 17:47:31.454571  285958 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 17:47:31.455604  285958 kubeadm.go:310] 
	I1105 17:47:31.455683  285958 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 17:47:31.455690  285958 kubeadm.go:310] 
	I1105 17:47:31.455767  285958 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 17:47:31.455771  285958 kubeadm.go:310] 
	I1105 17:47:31.455797  285958 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 17:47:31.455862  285958 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 17:47:31.455914  285958 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 17:47:31.455919  285958 kubeadm.go:310] 
	I1105 17:47:31.455972  285958 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 17:47:31.455977  285958 kubeadm.go:310] 
	I1105 17:47:31.456024  285958 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 17:47:31.456032  285958 kubeadm.go:310] 
	I1105 17:47:31.456083  285958 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 17:47:31.456158  285958 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 17:47:31.456227  285958 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 17:47:31.456231  285958 kubeadm.go:310] 
	I1105 17:47:31.456314  285958 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 17:47:31.456391  285958 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 17:47:31.456396  285958 kubeadm.go:310] 
	I1105 17:47:31.456479  285958 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rsv0a1.q27lp5o52vrw8wgr \
	I1105 17:47:31.456583  285958 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e7145c6c1814668d016f7eaa1b0396fc58dc6956712e65f29fc86a3e27d67eb \
	I1105 17:47:31.456622  285958 kubeadm.go:310] 	--control-plane 
	I1105 17:47:31.456628  285958 kubeadm.go:310] 
	I1105 17:47:31.456713  285958 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 17:47:31.456717  285958 kubeadm.go:310] 
	I1105 17:47:31.456803  285958 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rsv0a1.q27lp5o52vrw8wgr \
	I1105 17:47:31.456906  285958 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e7145c6c1814668d016f7eaa1b0396fc58dc6956712e65f29fc86a3e27d67eb 
	I1105 17:47:31.461045  285958 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-aws\n", err: exit status 1
	I1105 17:47:31.461159  285958 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 17:47:31.461175  285958 cni.go:84] Creating CNI manager for ""
	I1105 17:47:31.461184  285958 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1105 17:47:31.464098  285958 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1105 17:47:31.466997  285958 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1105 17:47:31.470722  285958 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1105 17:47:31.470744  285958 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1105 17:47:31.488265  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
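	kindnet is applied here as the CNI for the docker driver + crio runtime combination. A quick readiness check from the host, assuming the DaemonSet name and app=kindnet label used by the manifest minikube ships:

		kubectl --context addons-638421 -n kube-system rollout status daemonset kindnet
		kubectl --context addons-638421 -n kube-system get pods -l app=kindnet -o wide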
	I1105 17:47:31.763631  285958 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 17:47:31.763768  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:31.763852  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-638421 minikube.k8s.io/updated_at=2024_11_05T17_47_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=addons-638421 minikube.k8s.io/primary=true
	I1105 17:47:31.771708  285958 ops.go:34] apiserver oom_adj: -16
	I1105 17:47:31.897543  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:32.398359  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:32.898411  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:33.398331  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:33.898202  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:34.398282  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:34.897726  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:35.397920  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:35.898506  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:35.989022  285958 kubeadm.go:1113] duration metric: took 4.225299232s to wait for elevateKubeSystemPrivileges
	I1105 17:47:35.989051  285958 kubeadm.go:394] duration metric: took 19.896984389s to StartCluster
	I1105 17:47:35.989068  285958 settings.go:142] acquiring lock: {Name:mk4446dbaea3bd85b9adc705341ee771323ec865 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:35.989199  285958 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-279806/kubeconfig
	I1105 17:47:35.990064  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/kubeconfig: {Name:mk94e1e77f14516629f7a9763439bf1ac2a3fdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:35.993401  285958 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 17:47:35.993835  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1105 17:47:35.994240  285958 config.go:182] Loaded profile config "addons-638421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 17:47:35.994293  285958 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1105 17:47:35.994380  285958 addons.go:69] Setting yakd=true in profile "addons-638421"
	I1105 17:47:35.994408  285958 addons.go:234] Setting addon yakd=true in "addons-638421"
	I1105 17:47:35.994436  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:35.994926  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:35.995201  285958 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-638421"
	I1105 17:47:35.995219  285958 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-638421"
	I1105 17:47:35.995245  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:35.995629  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:35.996250  285958 addons.go:69] Setting cloud-spanner=true in profile "addons-638421"
	I1105 17:47:35.996274  285958 addons.go:234] Setting addon cloud-spanner=true in "addons-638421"
	I1105 17:47:35.996299  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:35.996731  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.003798  285958 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-638421"
	I1105 17:47:36.003868  285958 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-638421"
	I1105 17:47:36.003902  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.004404  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.004774  285958 out.go:177] * Verifying Kubernetes components...
	I1105 17:47:36.007210  285958 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-638421"
	I1105 17:47:36.007246  285958 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-638421"
	I1105 17:47:36.007286  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.012923  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.013129  285958 addons.go:69] Setting registry=true in profile "addons-638421"
	I1105 17:47:36.013178  285958 addons.go:234] Setting addon registry=true in "addons-638421"
	I1105 17:47:36.013229  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.013716  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.021146  285958 addons.go:69] Setting default-storageclass=true in profile "addons-638421"
	I1105 17:47:36.021175  285958 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-638421"
	I1105 17:47:36.021499  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.029011  285958 addons.go:69] Setting storage-provisioner=true in profile "addons-638421"
	I1105 17:47:36.029053  285958 addons.go:234] Setting addon storage-provisioner=true in "addons-638421"
	I1105 17:47:36.029088  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.029552  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.044836  285958 addons.go:69] Setting gcp-auth=true in profile "addons-638421"
	I1105 17:47:36.044869  285958 mustload.go:65] Loading cluster: addons-638421
	I1105 17:47:36.045063  285958 config.go:182] Loaded profile config "addons-638421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 17:47:36.045305  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.063111  285958 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-638421"
	I1105 17:47:36.063194  285958 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-638421"
	I1105 17:47:36.063574  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.063856  285958 addons.go:69] Setting ingress=true in profile "addons-638421"
	I1105 17:47:36.063874  285958 addons.go:234] Setting addon ingress=true in "addons-638421"
	I1105 17:47:36.063910  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.064284  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.092395  285958 addons.go:69] Setting ingress-dns=true in profile "addons-638421"
	I1105 17:47:36.092424  285958 addons.go:234] Setting addon ingress-dns=true in "addons-638421"
	I1105 17:47:36.092473  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.092950  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.105420  285958 addons.go:69] Setting volcano=true in profile "addons-638421"
	I1105 17:47:36.105463  285958 addons.go:234] Setting addon volcano=true in "addons-638421"
	I1105 17:47:36.105500  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.105962  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.110397  285958 addons.go:69] Setting inspektor-gadget=true in profile "addons-638421"
	I1105 17:47:36.110424  285958 addons.go:234] Setting addon inspektor-gadget=true in "addons-638421"
	I1105 17:47:36.110461  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.110912  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.127909  285958 addons.go:69] Setting metrics-server=true in profile "addons-638421"
	I1105 17:47:36.127938  285958 addons.go:234] Setting addon metrics-server=true in "addons-638421"
	I1105 17:47:36.127975  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.128429  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.138630  285958 addons.go:69] Setting volumesnapshots=true in profile "addons-638421"
	I1105 17:47:36.138664  285958 addons.go:234] Setting addon volumesnapshots=true in "addons-638421"
	I1105 17:47:36.138723  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.139193  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.156122  285958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 17:47:36.163572  285958 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1105 17:47:36.166490  285958 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1105 17:47:36.166513  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1105 17:47:36.166624  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.253238  285958 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1105 17:47:36.253579  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1105 17:47:36.255802  285958 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1105 17:47:36.255983  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1105 17:47:36.256156  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.270952  285958 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1105 17:47:36.271119  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1105 17:47:36.275006  285958 out.go:177]   - Using image docker.io/registry:2.8.3
	I1105 17:47:36.278006  285958 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1105 17:47:36.278648  285958 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1105 17:47:36.278665  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1105 17:47:36.278730  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.281564  285958 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-638421"
	I1105 17:47:36.281607  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.282015  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	W1105 17:47:36.284923  285958 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
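
Note: the 'volcano' failure above is expected on this job rather than a regression: the profile runs the crio container runtime (see the profile config loaded at 17:47:36.045063), and the volcano addon declares crio unsupported, so minikube reports the callback error and continues with the remaining addons. If the warning is unwanted in future runs, the addon can be left disabled explicitly; a minimal sketch, assuming the same profile name:

	minikube -p addons-638421 addons disable volcano
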
	I1105 17:47:36.285029  285958 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1105 17:47:36.285101  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.288804  285958 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 17:47:36.289653  285958 addons.go:234] Setting addon default-storageclass=true in "addons-638421"
	I1105 17:47:36.289682  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.290087  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.290233  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1105 17:47:36.290440  285958 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1105 17:47:36.305885  285958 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1105 17:47:36.313355  285958 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1105 17:47:36.313378  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1105 17:47:36.313438  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.313585  285958 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1105 17:47:36.313749  285958 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1105 17:47:36.313759  285958 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1105 17:47:36.313800  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.318224  285958 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1105 17:47:36.318469  285958 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 17:47:36.318484  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 17:47:36.318536  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.322284  285958 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1105 17:47:36.322305  285958 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1105 17:47:36.322378  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.331771  285958 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1105 17:47:36.331794  285958 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1105 17:47:36.331855  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.346251  285958 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 17:47:36.346276  285958 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 17:47:36.346342  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.350374  285958 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1105 17:47:36.350403  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1105 17:47:36.350465  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.365104  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1105 17:47:36.368741  285958 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1105 17:47:36.370854  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1105 17:47:36.373152  285958 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1105 17:47:36.376920  285958 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1105 17:47:36.376945  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1105 17:47:36.377012  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.377199  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1105 17:47:36.387932  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1105 17:47:36.392425  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1105 17:47:36.421407  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1105 17:47:36.425022  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1105 17:47:36.427264  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.428667  285958 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1105 17:47:36.428691  285958 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1105 17:47:36.428753  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.429250  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.481663  285958 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 17:47:36.481741  285958 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 17:47:36.481837  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.507850  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.523526  285958 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1105 17:47:36.528177  285958 out.go:177]   - Using image docker.io/busybox:stable
	I1105 17:47:36.534271  285958 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1105 17:47:36.534300  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1105 17:47:36.534372  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.536693  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.567122  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.575846  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.584845  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.585612  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.585732  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.595438  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.600689  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	W1105 17:47:36.602331  285958 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1105 17:47:36.602356  285958 retry.go:31] will retry after 308.370699ms: ssh: handshake failed: EOF
	I1105 17:47:36.608800  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.627581  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	W1105 17:47:36.629425  285958 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1105 17:47:36.629451  285958 retry.go:31] will retry after 317.71354ms: ssh: handshake failed: EOF
	I1105 17:47:36.648779  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
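
Each sshutil client above connects to 127.0.0.1:33135, the host port that the repeated docker inspect template resolves from the node container's published 22/tcp mapping; the two "ssh: handshake failed: EOF" warnings are transient (sshd inside the container is still coming up) and are retried after roughly 300ms. The same port lookup can be reproduced by hand, as a sketch using the command already shown in the log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-638421
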
	I1105 17:47:36.816392  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1105 17:47:36.861685  285958 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 17:47:36.883858  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1105 17:47:36.890675  285958 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1105 17:47:36.890701  285958 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1105 17:47:36.968860  285958 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1105 17:47:36.968891  285958 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1105 17:47:36.975902  285958 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1105 17:47:36.975925  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1105 17:47:37.005390  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1105 17:47:37.009797  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 17:47:37.047570  285958 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1105 17:47:37.047646  285958 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1105 17:47:37.057609  285958 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 17:47:37.057635  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1105 17:47:37.090752  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1105 17:47:37.114852  285958 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1105 17:47:37.114878  285958 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1105 17:47:37.127837  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1105 17:47:37.133138  285958 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1105 17:47:37.133161  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1105 17:47:37.137825  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1105 17:47:37.156172  285958 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1105 17:47:37.156195  285958 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1105 17:47:37.180478  285958 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1105 17:47:37.180509  285958 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1105 17:47:37.194979  285958 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 17:47:37.195009  285958 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 17:47:37.279721  285958 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1105 17:47:37.279750  285958 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1105 17:47:37.318028  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1105 17:47:37.321284  285958 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1105 17:47:37.321309  285958 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1105 17:47:37.358544  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 17:47:37.370080  285958 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1105 17:47:37.370107  285958 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1105 17:47:37.405079  285958 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 17:47:37.405107  285958 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 17:47:37.410080  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1105 17:47:37.458380  285958 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1105 17:47:37.458423  285958 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1105 17:47:37.475543  285958 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1105 17:47:37.475575  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1105 17:47:37.533876  285958 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1105 17:47:37.533915  285958 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1105 17:47:37.568722  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 17:47:37.581760  285958 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1105 17:47:37.581804  285958 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1105 17:47:37.674118  285958 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1105 17:47:37.674144  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1105 17:47:37.702915  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1105 17:47:37.712158  285958 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1105 17:47:37.712197  285958 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1105 17:47:37.774962  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1105 17:47:37.815972  285958 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1105 17:47:37.815996  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1105 17:47:37.949834  285958 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1105 17:47:37.949873  285958 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1105 17:47:38.009830  285958 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1105 17:47:38.009856  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1105 17:47:38.123769  285958 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1105 17:47:38.123801  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1105 17:47:38.212324  285958 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1105 17:47:38.212351  285958 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1105 17:47:38.287388  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1105 17:47:38.525738  285958 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.272134647s)
	I1105 17:47:38.525776  285958 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
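
The 2.27s command completed above is the CoreDNS rewrite started at 17:47:36.253579: it reads the coredns ConfigMap, uses sed to insert a hosts block (192.168.49.1 -> host.minikube.internal, with fallthrough) ahead of the "forward . /etc/resolv.conf" line and a "log" directive before "errors", then replaces the ConfigMap. To confirm the injection from a workstation, a sketch assuming the kube context carries the same name as the profile:

	kubectl --context addons-638421 -n kube-system get configmap coredns -o yaml

The Corefile in that output should then contain a stanza like:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
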
	I1105 17:47:40.112730  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.296284303s)
	I1105 17:47:40.112790  285958 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.251082635s)
	I1105 17:47:40.113712  285958 node_ready.go:35] waiting up to 6m0s for node "addons-638421" to be "Ready" ...
	I1105 17:47:40.210419  285958 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-638421" context rescaled to 1 replicas
	I1105 17:47:40.858898  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.975003586s)
	I1105 17:47:42.172274  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:47:43.000625  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.995180215s)
	I1105 17:47:43.000813  285958 addons.go:475] Verifying addon ingress=true in "addons-638421"
	I1105 17:47:43.000837  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.872971503s)
	I1105 17:47:43.000929  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.863076106s)
	I1105 17:47:43.000953  285958 addons.go:475] Verifying addon registry=true in "addons-638421"
	I1105 17:47:43.000734  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.99091197s)
	I1105 17:47:43.000787  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.909969776s)
	I1105 17:47:43.001426  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.683362299s)
	I1105 17:47:43.001475  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.642908318s)
	I1105 17:47:43.001525  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.591423852s)
	I1105 17:47:43.001676  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.432923369s)
	I1105 17:47:43.001690  285958 addons.go:475] Verifying addon metrics-server=true in "addons-638421"
	I1105 17:47:43.001731  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.298788469s)
	I1105 17:47:43.003682  285958 out.go:177] * Verifying ingress addon...
	I1105 17:47:43.003778  285958 out.go:177] * Verifying registry addon...
	I1105 17:47:43.003834  285958 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-638421 service yakd-dashboard -n yakd-dashboard
	
	I1105 17:47:43.006467  285958 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1105 17:47:43.008189  285958 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1105 17:47:43.035783  285958 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1105 17:47:43.035812  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1105 17:47:43.055345  285958 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
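
The 'storage-provisioner-rancher' warning above is an optimistic-concurrency conflict rather than a missing resource: marking the local-path StorageClass as default raced with another write to the same object (the default-storageclass addon is applied in parallel), so the API server rejected the stale update. If the default annotation does not end up set, it can be reapplied manually; a minimal sketch, assuming the local-path class was created:

	kubectl --context addons-638421 patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
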
	I1105 17:47:43.056432  285958 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1105 17:47:43.056452  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:43.116803  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.341796752s)
	W1105 17:47:43.116844  285958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1105 17:47:43.116887  285958 retry.go:31] will retry after 255.002227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
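
The apply failure above is the usual CRD-ordering race: the VolumeSnapshot CRDs and the VolumeSnapshotClass that uses them are submitted in one kubectl apply batch, and the class is rejected because snapshot.storage.k8s.io/v1 is not served until the CRDs are established. The retry below (17:47:43.372993) re-runs the batch with --force and completes about 2.7s later without a further retry. Waiting for the CRD explicitly is another option; a sketch assuming the same context:

	kubectl --context addons-638421 wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
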
	I1105 17:47:43.315704  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.028263294s)
	I1105 17:47:43.315794  285958 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-638421"
	I1105 17:47:43.320304  285958 out.go:177] * Verifying csi-hostpath-driver addon...
	I1105 17:47:43.323876  285958 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1105 17:47:43.334295  285958 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1105 17:47:43.334365  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
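
The repeated kapi.go:96 lines that follow are minikube polling these label selectors until the pods report Ready; the node "addons-638421" itself is still NotReady at this point, so everything stays Pending. The same checks can be run by hand while the test is in flight, as a sketch using the selectors and namespaces shown above:

	kubectl --context addons-638421 -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
	kubectl --context addons-638421 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
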
	I1105 17:47:43.372993  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1105 17:47:43.520254  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:43.521595  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:43.827884  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:44.011610  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:44.012295  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:44.327871  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:44.511171  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:44.513108  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:44.617005  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:47:44.827670  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:45.011750  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:45.013127  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:45.328480  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:45.510813  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:45.512540  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:45.832333  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:46.012586  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:46.014902  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:46.049040  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.675987895s)
	I1105 17:47:46.328243  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:46.511888  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:46.512947  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:46.617359  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:47:46.827348  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:47.010583  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:47.012548  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:47.030183  285958 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1105 17:47:47.030269  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:47.047719  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:47.146233  285958 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1105 17:47:47.164339  285958 addons.go:234] Setting addon gcp-auth=true in "addons-638421"
	I1105 17:47:47.164403  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:47.164899  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:47.187618  285958 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1105 17:47:47.187676  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:47.211580  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:47.316058  285958 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1105 17:47:47.324266  285958 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1105 17:47:47.327421  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:47.331960  285958 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1105 17:47:47.331987  285958 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1105 17:47:47.350260  285958 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1105 17:47:47.350283  285958 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1105 17:47:47.368208  285958 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1105 17:47:47.368229  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1105 17:47:47.386115  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1105 17:47:47.512570  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:47.513265  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:47.830470  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:47.928919  285958 addons.go:475] Verifying addon gcp-auth=true in "addons-638421"
	I1105 17:47:47.933449  285958 out.go:177] * Verifying gcp-auth addon...
	I1105 17:47:47.937071  285958 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1105 17:47:47.942134  285958 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1105 17:47:47.942192  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:48.042967  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:48.043819  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:48.327719  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:48.440810  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:48.511006  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:48.511638  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:48.827772  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:48.940336  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:49.010709  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:49.011926  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:49.116709  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:47:49.328079  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:49.440342  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:49.510786  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:49.511999  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:49.827541  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:49.940129  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:50.012396  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:50.012641  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:50.328255  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:50.441154  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:50.511105  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:50.511657  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:50.828086  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:50.940600  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:51.012260  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:51.013072  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:51.117573  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:47:51.327645  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:51.440922  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:51.510825  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:51.511543  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:51.829139  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:51.940734  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:52.011742  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:52.012924  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:52.327912  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:52.440588  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:52.510998  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:52.512219  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:52.827258  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:52.940834  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:53.010790  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:53.011838  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:53.327341  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:53.440646  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:53.511491  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:53.511490  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:53.617584  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:47:53.828195  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:53.940928  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:54.011669  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:54.013357  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:54.327009  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:54.440676  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:54.510334  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:54.511719  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:54.827926  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:54.939937  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:55.010848  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:55.012337  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:55.327608  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:55.440204  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:55.511666  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:55.512968  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:55.828004  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:55.940260  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:56.011260  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:56.011685  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:56.117542  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:47:56.327635  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:56.440325  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:56.510728  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:56.513772  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:56.827882  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:56.940947  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:57.011394  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:57.011648  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:57.327991  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:57.440530  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:57.510492  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:57.512121  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:57.828375  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:57.941729  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:58.012254  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:58.013503  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:58.117642  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:47:58.328560  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:58.441114  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:58.511342  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:58.513019  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:58.827301  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:58.941013  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:59.011204  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:59.011554  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:59.327392  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:59.440632  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:59.510797  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:59.511707  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:59.827761  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:59.940959  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:00.042819  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:00.044680  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:00.118404  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:00.327868  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:00.440681  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:00.510721  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:00.512303  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:00.827450  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:00.940713  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:01.011078  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:01.012280  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:01.328043  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:01.440510  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:01.513038  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:01.518656  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:01.828284  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:01.940681  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:02.011217  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:02.012391  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:02.328216  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:02.440649  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:02.512098  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:02.512349  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:02.617551  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:02.828777  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:02.941147  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:03.010404  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:03.012802  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:03.327820  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:03.440325  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:03.511311  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:03.512011  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:03.829624  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:03.940803  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:04.011207  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:04.012258  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:04.327904  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:04.442150  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:04.510872  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:04.512687  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:04.826914  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:04.940529  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:05.011332  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:05.012071  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:05.117340  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:05.330252  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:05.440541  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:05.512036  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:05.514146  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:05.827774  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:05.940558  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:06.010539  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:06.013067  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:06.328086  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:06.440707  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:06.510264  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:06.512822  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:06.827376  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:06.941418  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:07.011058  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:07.012307  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:07.117828  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:07.327059  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:07.440051  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:07.510355  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:07.511671  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:07.827838  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:07.941181  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:08.010965  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:08.012529  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:08.326975  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:08.441048  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:08.510450  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:08.512005  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:08.827555  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:08.941184  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:09.010864  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:09.012156  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:09.328025  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:09.440673  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:09.510272  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:09.511692  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:09.617442  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:09.827390  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:09.940801  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:10.011042  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:10.012596  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:10.328301  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:10.440485  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:10.510132  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:10.512648  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:10.827512  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:10.940274  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:11.010716  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:11.011217  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:11.327907  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:11.440874  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:11.513465  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:11.514475  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:11.827818  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:11.940397  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:12.010836  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:12.013090  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:12.116808  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:12.327162  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:12.440582  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:12.510508  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:12.511899  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:12.828075  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:12.940072  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:13.011307  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:13.011485  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:13.327697  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:13.440277  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:13.511412  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:13.512013  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:13.827173  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:13.940846  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:14.011342  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:14.012471  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:14.117452  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:14.327492  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:14.441135  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:14.510799  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:14.512243  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:14.827869  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:14.940583  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:15.010886  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:15.012979  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:15.327501  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:15.440891  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:15.511130  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:15.511175  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:15.827674  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:15.940807  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:16.010777  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:16.012167  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:16.117644  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:16.327727  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:16.440886  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:16.510879  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:16.512150  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:16.827159  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:16.940331  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:17.010883  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:17.012031  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:17.327885  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:17.441214  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:17.510962  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:17.512161  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:17.827922  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:17.940378  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:18.010836  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:18.011541  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:18.328095  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:18.440580  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:18.510990  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:18.511353  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:18.617206  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:18.827973  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:18.941364  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:19.011807  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:19.012848  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:19.327683  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:19.441149  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:19.510950  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:19.511991  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:19.828005  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:19.941917  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:20.011735  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:20.013175  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:20.327710  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:20.440685  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:20.510847  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:20.512733  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:20.617474  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:20.827808  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:20.941064  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:21.010951  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:21.011831  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:21.327667  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:21.440958  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:21.511595  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:21.512315  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:21.827152  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:21.940868  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:22.010372  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:22.012844  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:22.327560  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:22.441047  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:22.511193  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:22.511996  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:22.634241  285958 node_ready.go:49] node "addons-638421" has status "Ready":"True"
	I1105 17:48:22.634267  285958 node_ready.go:38] duration metric: took 42.520529112s for node "addons-638421" to be "Ready" ...
	I1105 17:48:22.634279  285958 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 17:48:22.657774  285958 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fc54b" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:22.832590  285958 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1105 17:48:22.832639  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:23.069865  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:23.070777  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:23.106850  285958 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1105 17:48:23.106877  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:23.337024  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:23.443489  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:23.544434  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:23.546163  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:23.833966  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:23.940500  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:24.011993  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:24.012934  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:24.164722  285958 pod_ready.go:93] pod "coredns-7c65d6cfc9-fc54b" in "kube-system" namespace has status "Ready":"True"
	I1105 17:48:24.164753  285958 pod_ready.go:82] duration metric: took 1.506952096s for pod "coredns-7c65d6cfc9-fc54b" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.164776  285958 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-638421" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.170006  285958 pod_ready.go:93] pod "etcd-addons-638421" in "kube-system" namespace has status "Ready":"True"
	I1105 17:48:24.170032  285958 pod_ready.go:82] duration metric: took 5.247173ms for pod "etcd-addons-638421" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.170047  285958 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-638421" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.175621  285958 pod_ready.go:93] pod "kube-apiserver-addons-638421" in "kube-system" namespace has status "Ready":"True"
	I1105 17:48:24.175643  285958 pod_ready.go:82] duration metric: took 5.588186ms for pod "kube-apiserver-addons-638421" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.175656  285958 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-638421" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.181204  285958 pod_ready.go:93] pod "kube-controller-manager-addons-638421" in "kube-system" namespace has status "Ready":"True"
	I1105 17:48:24.181225  285958 pod_ready.go:82] duration metric: took 5.560888ms for pod "kube-controller-manager-addons-638421" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.181240  285958 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rjktl" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.218573  285958 pod_ready.go:93] pod "kube-proxy-rjktl" in "kube-system" namespace has status "Ready":"True"
	I1105 17:48:24.218596  285958 pod_ready.go:82] duration metric: took 37.349287ms for pod "kube-proxy-rjktl" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.218609  285958 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-638421" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.329052  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:24.441381  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:24.510564  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:24.512309  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:24.617858  285958 pod_ready.go:93] pod "kube-scheduler-addons-638421" in "kube-system" namespace has status "Ready":"True"
	I1105 17:48:24.617890  285958 pod_ready.go:82] duration metric: took 399.27329ms for pod "kube-scheduler-addons-638421" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.617903  285958 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.828967  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:24.940551  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:25.012060  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:25.012446  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:25.329031  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:25.440958  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:25.511879  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:25.515141  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:25.829751  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:25.941570  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:26.014944  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:26.015772  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:26.330313  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:26.441337  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:26.516239  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:26.522179  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:26.624678  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:26.828264  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:26.940798  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:27.012426  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:27.014296  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:27.332926  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:27.441277  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:27.511561  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:27.512858  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:27.829059  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:27.941679  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:28.011998  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:28.015644  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:28.329782  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:28.441893  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:28.513553  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:28.515812  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:28.625263  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:28.829277  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:28.941310  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:29.013458  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:29.015724  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:29.329275  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:29.441060  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:29.512257  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:29.513393  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:29.829778  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:29.941705  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:30.044735  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:30.046485  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:30.330138  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:30.441666  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:30.514398  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:30.517837  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:30.830025  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:30.941780  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:31.012458  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:31.015026  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:31.124583  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:31.330028  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:31.448295  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:31.512037  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:31.512655  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:31.828360  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:31.940015  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:32.011467  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:32.012685  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:32.329280  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:32.440451  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:32.510758  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:32.513174  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:32.829390  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:32.940717  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:33.011926  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:33.013372  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:33.132890  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:33.330399  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:33.440902  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:33.512648  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:33.514792  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:33.829964  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:33.941161  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:34.014174  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:34.016690  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:34.331103  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:34.441283  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:34.514114  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:34.516630  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:34.829471  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:34.941615  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:35.014143  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:35.015543  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:35.331020  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:35.443332  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:35.514057  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:35.522056  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:35.624569  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:35.830672  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:35.941785  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:36.013341  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:36.015550  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:36.331397  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:36.445796  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:36.512191  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:36.514760  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:36.829942  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:36.940857  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:37.011915  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:37.013293  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:37.329305  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:37.441811  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:37.513006  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:37.514044  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:37.624671  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:37.833082  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:37.942301  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:38.011404  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:38.013727  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:38.328831  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:38.440840  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:38.512715  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:38.513352  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:38.829724  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:38.942235  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:39.013528  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:39.015356  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:39.329635  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:39.441623  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:39.512957  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:39.515737  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:39.625524  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:39.831491  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:39.941118  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:40.015616  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:40.017737  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:40.329963  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:40.440957  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:40.518012  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:40.520694  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:40.829382  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:40.940989  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:41.010637  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:41.014826  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:41.334993  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:41.469161  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:41.570672  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:41.572012  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:41.627249  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:41.829487  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:41.940311  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:42.043652  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:42.044314  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:42.329012  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:42.441455  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:42.510325  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:42.512192  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:42.828813  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:42.940600  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:43.026584  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:43.028721  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:43.328554  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:43.441252  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:43.517514  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:43.518750  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:43.829138  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:43.957803  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:44.013741  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:44.015073  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:44.125777  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:44.329226  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:44.441256  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:44.513020  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:44.514858  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:44.830089  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:44.941004  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:45.011185  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:45.013204  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:45.328627  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:45.440657  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:45.514860  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:45.515795  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:45.829785  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:45.954806  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:46.029756  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:46.030133  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:46.333377  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:46.462710  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:46.511299  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:46.512889  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:46.624398  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:46.828710  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:46.940721  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:47.012135  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:47.013129  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:47.329031  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:47.441531  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:47.512452  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:47.514409  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:47.839907  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:47.940476  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:48.012201  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:48.014457  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:48.329592  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:48.442166  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:48.513561  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:48.514398  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:48.628570  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:48.829527  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:48.940185  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:49.011436  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:49.012187  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:49.329146  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:49.446800  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:49.545327  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:49.545681  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:49.829696  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:49.940824  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:50.012811  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:50.014030  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:50.330479  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:50.441292  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:50.511125  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:50.513491  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:50.829096  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:50.941148  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:51.014969  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:51.017078  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:51.125441  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:51.329804  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:51.441428  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:51.510572  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:51.511909  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:51.828686  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:51.941614  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:52.012352  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:52.013613  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:52.328913  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:52.441002  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:52.511651  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:52.512560  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:52.828764  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:52.940511  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:53.010657  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:53.012374  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:53.330422  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:53.440477  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:53.511233  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:53.512816  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:53.624228  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:53.829520  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:53.941051  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:54.013923  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:54.016929  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:54.329122  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:54.441487  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:54.522254  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:54.524417  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:54.829162  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:54.941529  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:55.016075  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:55.018402  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:55.329913  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:55.441010  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:55.513324  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:55.514369  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:55.625034  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:55.829913  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:55.940398  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:56.012919  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:56.014118  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:56.329407  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:56.441231  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:56.513277  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:56.515801  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:56.828652  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:56.941194  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:57.013327  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:57.014895  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:57.330886  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:57.441235  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:57.511603  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:57.513995  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:57.625752  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:57.830132  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:57.941185  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:58.013635  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:58.015335  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:58.329044  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:58.441291  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:58.512862  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:58.515091  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:58.829446  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:58.941268  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:59.013738  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:59.015319  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:59.330070  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:59.440993  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:59.515627  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:59.516198  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:59.829607  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:59.941235  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:00.030341  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:00.031926  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:00.130848  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:00.329838  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:00.440566  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:00.512718  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:00.514384  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:00.829282  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:00.941435  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:01.012452  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:01.013661  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:01.330137  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:01.440773  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:01.511655  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:01.517422  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:01.830949  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:01.941279  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:02.012467  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:02.015097  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:02.333414  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:02.441811  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:02.513194  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:02.514824  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:02.625962  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:02.829068  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:02.941133  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:03.012010  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:03.014279  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:03.330388  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:03.443420  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:03.511201  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:03.513399  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:03.830112  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:03.940814  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:04.012535  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:04.014753  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:04.329465  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:04.441214  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:04.541933  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:04.543932  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:04.828727  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:04.940818  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:05.011318  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:05.012946  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:05.126597  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:05.328796  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:05.440421  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:05.512227  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:05.513539  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:05.829073  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:05.940177  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:06.011512  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:06.013406  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:06.329560  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:06.440863  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:06.511296  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:06.513093  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:06.829431  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:06.940664  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:07.011311  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:07.013244  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:07.331507  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:07.441043  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:07.513595  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:07.514826  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:07.625143  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:07.830835  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:07.941737  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:08.020798  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:08.022782  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:08.330310  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:08.450055  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:08.511287  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:08.512009  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:08.829433  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:08.940774  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:09.011522  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:09.013225  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:09.330420  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:09.440072  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:09.511855  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:09.516591  285958 kapi.go:107] duration metric: took 1m26.508396466s to wait for kubernetes.io/minikube-addons=registry ...
	I1105 17:49:09.627033  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:09.829714  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:09.940773  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:10.011476  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:10.329961  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:10.441095  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:10.510696  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:10.828013  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:10.940405  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:11.010956  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:11.328962  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:11.443384  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:11.512373  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:11.830137  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:11.941709  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:12.011282  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:12.125262  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:12.341700  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:12.441474  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:12.511526  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:12.842462  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:12.941042  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:13.011362  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:13.330382  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:13.441517  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:13.511927  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:13.829887  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:13.941392  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:14.021656  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:14.126633  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:14.328981  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:14.440251  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:14.510853  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:14.828673  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:14.940747  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:15.011888  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:15.329548  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:15.441165  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:15.511656  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:15.829876  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:15.941932  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:16.012537  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:16.127190  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:16.329522  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:16.441370  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:16.510834  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:16.829174  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:16.942258  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:17.012562  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:17.329307  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:17.441836  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:17.511233  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:17.829563  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:17.942064  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:18.011964  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:18.328670  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:18.441557  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:18.511082  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:18.626188  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:18.829154  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:18.940766  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:19.011557  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:19.330040  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:19.440876  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:19.510988  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:19.830217  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:19.941676  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:20.043189  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:20.328914  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:20.441831  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:20.511075  285958 kapi.go:107] duration metric: took 1m37.504610248s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1105 17:49:20.627838  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:20.834509  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:20.941083  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:21.328962  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:21.477208  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:21.834316  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:21.943517  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:22.328957  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:22.441604  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:22.829454  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:22.940784  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:23.125409  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:23.328810  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:23.441499  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:23.830180  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:23.940645  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:24.330052  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:24.448240  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:24.852432  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:24.941396  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:25.125853  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:25.330361  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:25.441494  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:25.828681  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:25.940867  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:26.329073  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:26.440422  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:26.829454  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:26.940980  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:27.126393  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:27.331898  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:27.441163  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:27.839656  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:27.940502  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:28.329665  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:28.441932  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:28.828532  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:28.941028  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:29.329733  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:29.441536  285958 kapi.go:107] duration metric: took 1m41.504466783s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1105 17:49:29.444196  285958 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-638421 cluster.
	I1105 17:49:29.446837  285958 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1105 17:49:29.449465  285958 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
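	The three gcp-auth messages above describe how that addon behaves once ready: GCP credentials are mounted into every pod created in the addons-638421 cluster from that point on, and a pod opts out by carrying a label whose key is `gcp-auth-skip-secret`. As a rough illustration only (the pod name, container, image, and label value below are placeholders, not taken from this run; the message only specifies the label key, so the "true" value is an assumption), such a pod configuration might look like:

	kubectl --context addons-638421 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                 # placeholder name
	  labels:
	    gcp-auth-skip-secret: "true"     # key taken from the message above; value assumed
	spec:
	  containers:
	  - name: app                        # placeholder container
	    image: nginx                     # placeholder image
	EOF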
	I1105 17:49:29.627083  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:29.830150  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:30.329648  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:30.829574  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:31.328954  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:31.828477  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:32.130733  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:32.330699  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:32.829287  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:33.329333  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:33.832213  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:34.329396  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:34.625644  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:34.839454  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:35.328801  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:35.828827  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:36.330234  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:36.632072  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:36.830884  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:37.334121  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:37.830079  285958 kapi.go:107] duration metric: took 1m54.506199764s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1105 17:49:37.833001  285958 out.go:177] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, ingress-dns, inspektor-gadget, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1105 17:49:37.835695  285958 addons.go:510] duration metric: took 2m1.841406106s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner nvidia-device-plugin ingress-dns inspektor-gadget storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
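	The "Enabled addons" summary above can be cross-checked against the profile with the standard minikube listing command; this is a generic sketch using the binary and profile names from this report, not a command executed during the run:

	out/minikube-linux-arm64 -p addons-638421 addons list

	Addons that enabled successfully are reported with an "enabled" status in that listing.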
	I1105 17:49:39.124531  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:41.124722  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:43.624811  285958 pod_ready.go:93] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"True"
	I1105 17:49:43.624841  285958 pod_ready.go:82] duration metric: took 1m19.006929434s for pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace to be "Ready" ...
	I1105 17:49:43.624856  285958 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-sms7j" in "kube-system" namespace to be "Ready" ...
	I1105 17:49:43.630303  285958 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-sms7j" in "kube-system" namespace has status "Ready":"True"
	I1105 17:49:43.630331  285958 pod_ready.go:82] duration metric: took 5.466488ms for pod "nvidia-device-plugin-daemonset-sms7j" in "kube-system" namespace to be "Ready" ...
	I1105 17:49:43.630357  285958 pod_ready.go:39] duration metric: took 1m20.996057398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 17:49:43.630373  285958 api_server.go:52] waiting for apiserver process to appear ...
	I1105 17:49:43.630406  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 17:49:43.630471  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 17:49:43.690319  285958 cri.go:89] found id: "0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a"
	I1105 17:49:43.690387  285958 cri.go:89] found id: ""
	I1105 17:49:43.690402  285958 logs.go:282] 1 containers: [0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a]
	I1105 17:49:43.690459  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:43.694572  285958 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 17:49:43.694687  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 17:49:43.736655  285958 cri.go:89] found id: "5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0"
	I1105 17:49:43.736681  285958 cri.go:89] found id: ""
	I1105 17:49:43.736690  285958 logs.go:282] 1 containers: [5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0]
	I1105 17:49:43.736750  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:43.740097  285958 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 17:49:43.740224  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 17:49:43.777683  285958 cri.go:89] found id: "cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f"
	I1105 17:49:43.777705  285958 cri.go:89] found id: ""
	I1105 17:49:43.777713  285958 logs.go:282] 1 containers: [cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f]
	I1105 17:49:43.777767  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:43.781205  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 17:49:43.781276  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 17:49:43.826877  285958 cri.go:89] found id: "c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c"
	I1105 17:49:43.826900  285958 cri.go:89] found id: ""
	I1105 17:49:43.826909  285958 logs.go:282] 1 containers: [c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c]
	I1105 17:49:43.826986  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:43.830523  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 17:49:43.830611  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 17:49:43.875899  285958 cri.go:89] found id: "4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f"
	I1105 17:49:43.875980  285958 cri.go:89] found id: ""
	I1105 17:49:43.876003  285958 logs.go:282] 1 containers: [4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f]
	I1105 17:49:43.876093  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:43.879686  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 17:49:43.879780  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 17:49:43.918382  285958 cri.go:89] found id: "bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0"
	I1105 17:49:43.918412  285958 cri.go:89] found id: ""
	I1105 17:49:43.918426  285958 logs.go:282] 1 containers: [bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0]
	I1105 17:49:43.918489  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:43.921996  285958 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 17:49:43.922068  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 17:49:43.960184  285958 cri.go:89] found id: "1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952"
	I1105 17:49:43.960208  285958 cri.go:89] found id: ""
	I1105 17:49:43.960217  285958 logs.go:282] 1 containers: [1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952]
	I1105 17:49:43.960274  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:43.963882  285958 logs.go:123] Gathering logs for kube-scheduler [c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c] ...
	I1105 17:49:43.963908  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c"
	I1105 17:49:44.027704  285958 logs.go:123] Gathering logs for kube-proxy [4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f] ...
	I1105 17:49:44.027735  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f"
	I1105 17:49:44.076175  285958 logs.go:123] Gathering logs for kindnet [1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952] ...
	I1105 17:49:44.076246  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952"
	I1105 17:49:44.115998  285958 logs.go:123] Gathering logs for kubelet ...
	I1105 17:49:44.116032  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1105 17:49:44.177532  285958 logs.go:138] Found kubelet problem: Nov 05 17:47:36 addons-638421 kubelet[1497]: W1105 17:47:36.418169    1497 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-638421' and this object
	W1105 17:49:44.177768  285958 logs.go:138] Found kubelet problem: Nov 05 17:47:36 addons-638421 kubelet[1497]: E1105 17:47:36.418217    1497 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	W1105 17:49:44.204041  285958 logs.go:138] Found kubelet problem: Nov 05 17:48:22 addons-638421 kubelet[1497]: W1105 17:48:22.583617    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-638421' and this object
	W1105 17:49:44.204282  285958 logs.go:138] Found kubelet problem: Nov 05 17:48:22 addons-638421 kubelet[1497]: E1105 17:48:22.583667    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	I1105 17:49:44.237909  285958 logs.go:123] Gathering logs for dmesg ...
	I1105 17:49:44.237943  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 17:49:44.257082  285958 logs.go:123] Gathering logs for describe nodes ...
	I1105 17:49:44.257111  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 17:49:44.434838  285958 logs.go:123] Gathering logs for coredns [cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f] ...
	I1105 17:49:44.434871  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f"
	I1105 17:49:44.485089  285958 logs.go:123] Gathering logs for container status ...
	I1105 17:49:44.485120  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 17:49:44.535847  285958 logs.go:123] Gathering logs for kube-apiserver [0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a] ...
	I1105 17:49:44.535878  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a"
	I1105 17:49:44.613138  285958 logs.go:123] Gathering logs for etcd [5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0] ...
	I1105 17:49:44.613173  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0"
	I1105 17:49:44.666981  285958 logs.go:123] Gathering logs for kube-controller-manager [bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0] ...
	I1105 17:49:44.667014  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0"
	I1105 17:49:44.746061  285958 logs.go:123] Gathering logs for CRI-O ...
	I1105 17:49:44.746097  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 17:49:44.844511  285958 out.go:358] Setting ErrFile to fd 2...
	I1105 17:49:44.844547  285958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1105 17:49:44.844615  285958 out.go:270] X Problems detected in kubelet:
	W1105 17:49:44.844627  285958 out.go:270]   Nov 05 17:47:36 addons-638421 kubelet[1497]: W1105 17:47:36.418169    1497 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-638421' and this object
	W1105 17:49:44.844638  285958 out.go:270]   Nov 05 17:47:36 addons-638421 kubelet[1497]: E1105 17:47:36.418217    1497 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	W1105 17:49:44.844645  285958 out.go:270]   Nov 05 17:48:22 addons-638421 kubelet[1497]: W1105 17:48:22.583617    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-638421' and this object
	W1105 17:49:44.844652  285958 out.go:270]   Nov 05 17:48:22 addons-638421 kubelet[1497]: E1105 17:48:22.583667    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	I1105 17:49:44.844664  285958 out.go:358] Setting ErrFile to fd 2...
	I1105 17:49:44.844671  285958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:49:54.846654  285958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 17:49:54.863106  285958 api_server.go:72] duration metric: took 2m18.86965892s to wait for apiserver process to appear ...
	I1105 17:49:54.863135  285958 api_server.go:88] waiting for apiserver healthz status ...
	I1105 17:49:54.863174  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 17:49:54.863237  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 17:49:54.900477  285958 cri.go:89] found id: "0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a"
	I1105 17:49:54.900497  285958 cri.go:89] found id: ""
	I1105 17:49:54.900505  285958 logs.go:282] 1 containers: [0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a]
	I1105 17:49:54.900560  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:54.903993  285958 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 17:49:54.904060  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 17:49:54.941184  285958 cri.go:89] found id: "5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0"
	I1105 17:49:54.941208  285958 cri.go:89] found id: ""
	I1105 17:49:54.941217  285958 logs.go:282] 1 containers: [5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0]
	I1105 17:49:54.941272  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:54.944714  285958 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 17:49:54.944788  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 17:49:54.986856  285958 cri.go:89] found id: "cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f"
	I1105 17:49:54.986880  285958 cri.go:89] found id: ""
	I1105 17:49:54.986888  285958 logs.go:282] 1 containers: [cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f]
	I1105 17:49:54.986947  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:54.990528  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 17:49:54.990606  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 17:49:55.030564  285958 cri.go:89] found id: "c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c"
	I1105 17:49:55.030589  285958 cri.go:89] found id: ""
	I1105 17:49:55.030643  285958 logs.go:282] 1 containers: [c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c]
	I1105 17:49:55.030720  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:55.034564  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 17:49:55.034654  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 17:49:55.075092  285958 cri.go:89] found id: "4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f"
	I1105 17:49:55.075117  285958 cri.go:89] found id: ""
	I1105 17:49:55.075126  285958 logs.go:282] 1 containers: [4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f]
	I1105 17:49:55.075184  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:55.078865  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 17:49:55.078940  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 17:49:55.118682  285958 cri.go:89] found id: "bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0"
	I1105 17:49:55.118705  285958 cri.go:89] found id: ""
	I1105 17:49:55.118714  285958 logs.go:282] 1 containers: [bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0]
	I1105 17:49:55.118769  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:55.122390  285958 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 17:49:55.122468  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 17:49:55.159824  285958 cri.go:89] found id: "1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952"
	I1105 17:49:55.159847  285958 cri.go:89] found id: ""
	I1105 17:49:55.159856  285958 logs.go:282] 1 containers: [1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952]
	I1105 17:49:55.159915  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:55.163457  285958 logs.go:123] Gathering logs for dmesg ...
	I1105 17:49:55.163483  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 17:49:55.179445  285958 logs.go:123] Gathering logs for etcd [5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0] ...
	I1105 17:49:55.179473  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0"
	I1105 17:49:55.234825  285958 logs.go:123] Gathering logs for kube-proxy [4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f] ...
	I1105 17:49:55.234859  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f"
	I1105 17:49:55.276024  285958 logs.go:123] Gathering logs for kindnet [1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952] ...
	I1105 17:49:55.276050  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952"
	I1105 17:49:55.320748  285958 logs.go:123] Gathering logs for CRI-O ...
	I1105 17:49:55.320777  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 17:49:55.415558  285958 logs.go:123] Gathering logs for container status ...
	I1105 17:49:55.415597  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 17:49:55.471921  285958 logs.go:123] Gathering logs for kubelet ...
	I1105 17:49:55.471959  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1105 17:49:55.531460  285958 logs.go:138] Found kubelet problem: Nov 05 17:47:36 addons-638421 kubelet[1497]: W1105 17:47:36.418169    1497 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-638421' and this object
	W1105 17:49:55.531692  285958 logs.go:138] Found kubelet problem: Nov 05 17:47:36 addons-638421 kubelet[1497]: E1105 17:47:36.418217    1497 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	W1105 17:49:55.557830  285958 logs.go:138] Found kubelet problem: Nov 05 17:48:22 addons-638421 kubelet[1497]: W1105 17:48:22.583617    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-638421' and this object
	W1105 17:49:55.558064  285958 logs.go:138] Found kubelet problem: Nov 05 17:48:22 addons-638421 kubelet[1497]: E1105 17:48:22.583667    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	I1105 17:49:55.592228  285958 logs.go:123] Gathering logs for describe nodes ...
	I1105 17:49:55.592254  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 17:49:55.723278  285958 logs.go:123] Gathering logs for kube-apiserver [0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a] ...
	I1105 17:49:55.723315  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a"
	I1105 17:49:55.791344  285958 logs.go:123] Gathering logs for coredns [cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f] ...
	I1105 17:49:55.791376  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f"
	I1105 17:49:55.832101  285958 logs.go:123] Gathering logs for kube-scheduler [c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c] ...
	I1105 17:49:55.832130  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c"
	I1105 17:49:55.874288  285958 logs.go:123] Gathering logs for kube-controller-manager [bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0] ...
	I1105 17:49:55.874323  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0"
	I1105 17:49:55.942310  285958 out.go:358] Setting ErrFile to fd 2...
	I1105 17:49:55.942342  285958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1105 17:49:55.942407  285958 out.go:270] X Problems detected in kubelet:
	W1105 17:49:55.942418  285958 out.go:270]   Nov 05 17:47:36 addons-638421 kubelet[1497]: W1105 17:47:36.418169    1497 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-638421' and this object
	W1105 17:49:55.942427  285958 out.go:270]   Nov 05 17:47:36 addons-638421 kubelet[1497]: E1105 17:47:36.418217    1497 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	W1105 17:49:55.942443  285958 out.go:270]   Nov 05 17:48:22 addons-638421 kubelet[1497]: W1105 17:48:22.583617    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-638421' and this object
	W1105 17:49:55.942452  285958 out.go:270]   Nov 05 17:48:22 addons-638421 kubelet[1497]: E1105 17:48:22.583667    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	I1105 17:49:55.942465  285958 out.go:358] Setting ErrFile to fd 2...
	I1105 17:49:55.942472  285958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:50:05.943997  285958 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 17:50:05.954316  285958 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1105 17:50:05.955359  285958 api_server.go:141] control plane version: v1.31.2
	I1105 17:50:05.955389  285958 api_server.go:131] duration metric: took 11.092246489s to wait for apiserver health ...
	I1105 17:50:05.955399  285958 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 17:50:05.955422  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 17:50:05.955486  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 17:50:05.995851  285958 cri.go:89] found id: "0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a"
	I1105 17:50:05.995873  285958 cri.go:89] found id: ""
	I1105 17:50:05.995882  285958 logs.go:282] 1 containers: [0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a]
	I1105 17:50:05.995938  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:50:05.999482  285958 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 17:50:05.999567  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 17:50:06.038315  285958 cri.go:89] found id: "5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0"
	I1105 17:50:06.038338  285958 cri.go:89] found id: ""
	I1105 17:50:06.038347  285958 logs.go:282] 1 containers: [5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0]
	I1105 17:50:06.038404  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:50:06.041930  285958 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 17:50:06.042048  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 17:50:06.085559  285958 cri.go:89] found id: "cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f"
	I1105 17:50:06.085582  285958 cri.go:89] found id: ""
	I1105 17:50:06.085591  285958 logs.go:282] 1 containers: [cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f]
	I1105 17:50:06.085649  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:50:06.089348  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 17:50:06.089419  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 17:50:06.128403  285958 cri.go:89] found id: "c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c"
	I1105 17:50:06.128428  285958 cri.go:89] found id: ""
	I1105 17:50:06.128436  285958 logs.go:282] 1 containers: [c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c]
	I1105 17:50:06.128501  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:50:06.132326  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 17:50:06.132406  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 17:50:06.175706  285958 cri.go:89] found id: "4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f"
	I1105 17:50:06.175729  285958 cri.go:89] found id: ""
	I1105 17:50:06.175737  285958 logs.go:282] 1 containers: [4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f]
	I1105 17:50:06.175793  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:50:06.179314  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 17:50:06.179388  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 17:50:06.222170  285958 cri.go:89] found id: "bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0"
	I1105 17:50:06.222193  285958 cri.go:89] found id: ""
	I1105 17:50:06.222202  285958 logs.go:282] 1 containers: [bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0]
	I1105 17:50:06.222258  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:50:06.225638  285958 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 17:50:06.225708  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 17:50:06.269890  285958 cri.go:89] found id: "1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952"
	I1105 17:50:06.269912  285958 cri.go:89] found id: ""
	I1105 17:50:06.269920  285958 logs.go:282] 1 containers: [1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952]
	I1105 17:50:06.269978  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:50:06.273628  285958 logs.go:123] Gathering logs for coredns [cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f] ...
	I1105 17:50:06.273657  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f"
	I1105 17:50:06.312988  285958 logs.go:123] Gathering logs for kube-scheduler [c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c] ...
	I1105 17:50:06.313022  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c"
	I1105 17:50:06.364918  285958 logs.go:123] Gathering logs for kube-controller-manager [bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0] ...
	I1105 17:50:06.364951  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0"
	I1105 17:50:06.434251  285958 logs.go:123] Gathering logs for kindnet [1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952] ...
	I1105 17:50:06.434291  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952"
	I1105 17:50:06.473656  285958 logs.go:123] Gathering logs for CRI-O ...
	I1105 17:50:06.473685  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 17:50:06.572390  285958 logs.go:123] Gathering logs for container status ...
	I1105 17:50:06.572427  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 17:50:06.640943  285958 logs.go:123] Gathering logs for describe nodes ...
	I1105 17:50:06.640977  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 17:50:06.797333  285958 logs.go:123] Gathering logs for kube-apiserver [0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a] ...
	I1105 17:50:06.797369  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a"
	I1105 17:50:06.850355  285958 logs.go:123] Gathering logs for etcd [5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0] ...
	I1105 17:50:06.850387  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0"
	I1105 17:50:06.910877  285958 logs.go:123] Gathering logs for kube-proxy [4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f] ...
	I1105 17:50:06.910910  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f"
	I1105 17:50:06.949712  285958 logs.go:123] Gathering logs for kubelet ...
	I1105 17:50:06.949741  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1105 17:50:07.003517  285958 logs.go:138] Found kubelet problem: Nov 05 17:47:36 addons-638421 kubelet[1497]: W1105 17:47:36.418169    1497 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-638421' and this object
	W1105 17:50:07.003756  285958 logs.go:138] Found kubelet problem: Nov 05 17:47:36 addons-638421 kubelet[1497]: E1105 17:47:36.418217    1497 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	W1105 17:50:07.030068  285958 logs.go:138] Found kubelet problem: Nov 05 17:48:22 addons-638421 kubelet[1497]: W1105 17:48:22.583617    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-638421' and this object
	W1105 17:50:07.030310  285958 logs.go:138] Found kubelet problem: Nov 05 17:48:22 addons-638421 kubelet[1497]: E1105 17:48:22.583667    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	I1105 17:50:07.065433  285958 logs.go:123] Gathering logs for dmesg ...
	I1105 17:50:07.065462  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 17:50:07.082079  285958 out.go:358] Setting ErrFile to fd 2...
	I1105 17:50:07.082103  285958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1105 17:50:07.082154  285958 out.go:270] X Problems detected in kubelet:
	W1105 17:50:07.082171  285958 out.go:270]   Nov 05 17:47:36 addons-638421 kubelet[1497]: W1105 17:47:36.418169    1497 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-638421' and this object
	W1105 17:50:07.082178  285958 out.go:270]   Nov 05 17:47:36 addons-638421 kubelet[1497]: E1105 17:47:36.418217    1497 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	W1105 17:50:07.082186  285958 out.go:270]   Nov 05 17:48:22 addons-638421 kubelet[1497]: W1105 17:48:22.583617    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-638421' and this object
	W1105 17:50:07.082202  285958 out.go:270]   Nov 05 17:48:22 addons-638421 kubelet[1497]: E1105 17:48:22.583667    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	I1105 17:50:07.082208  285958 out.go:358] Setting ErrFile to fd 2...
	I1105 17:50:07.082214  285958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:50:17.093804  285958 system_pods.go:59] 18 kube-system pods found
	I1105 17:50:17.093852  285958 system_pods.go:61] "coredns-7c65d6cfc9-fc54b" [2ddf511d-7116-4d85-9a47-69451cf3567b] Running
	I1105 17:50:17.093859  285958 system_pods.go:61] "csi-hostpath-attacher-0" [0a836e59-ab7e-4299-9fa1-58898352e6e1] Running
	I1105 17:50:17.093864  285958 system_pods.go:61] "csi-hostpath-resizer-0" [9e2fd9dc-0b28-4b24-af35-169834609626] Running
	I1105 17:50:17.093868  285958 system_pods.go:61] "csi-hostpathplugin-spl7f" [302e097c-b1e5-4a6e-8974-ed54ac3622a7] Running
	I1105 17:50:17.093874  285958 system_pods.go:61] "etcd-addons-638421" [a4272f93-3c10-41e4-aa9c-d92d18e93912] Running
	I1105 17:50:17.093879  285958 system_pods.go:61] "kindnet-mgcb7" [edefae1d-4f88-4e94-a3f8-881d352214d7] Running
	I1105 17:50:17.093884  285958 system_pods.go:61] "kube-apiserver-addons-638421" [1823851c-e4ac-418b-806e-ec449280ed27] Running
	I1105 17:50:17.093922  285958 system_pods.go:61] "kube-controller-manager-addons-638421" [4cc07926-753f-4483-ac98-15581396a5bb] Running
	I1105 17:50:17.093934  285958 system_pods.go:61] "kube-ingress-dns-minikube" [347ca6ec-8068-4243-80fc-ec6e6a0eeb64] Running
	I1105 17:50:17.093938  285958 system_pods.go:61] "kube-proxy-rjktl" [d984a2cc-7426-4044-8f13-9082c887bda6] Running
	I1105 17:50:17.093942  285958 system_pods.go:61] "kube-scheduler-addons-638421" [1c971a68-3594-46ce-858f-59234800648b] Running
	I1105 17:50:17.093946  285958 system_pods.go:61] "metrics-server-84c5f94fbc-jnqlj" [d43aacca-7261-4530-9a58-1456060cb884] Running
	I1105 17:50:17.093949  285958 system_pods.go:61] "nvidia-device-plugin-daemonset-sms7j" [618e6ceb-8422-465e-9951-05b2b10ce4b0] Running
	I1105 17:50:17.093954  285958 system_pods.go:61] "registry-66c9cd494c-xl46f" [fc8d5d2f-faa3-4f66-b3c1-dac5435a86e5] Running
	I1105 17:50:17.093962  285958 system_pods.go:61] "registry-proxy-2jjl8" [c9892084-3bb9-41d8-b4e5-856524765e94] Running
	I1105 17:50:17.093966  285958 system_pods.go:61] "snapshot-controller-56fcc65765-4tgkv" [e740a30d-66d7-484d-ab45-50d3d0206cfc] Running
	I1105 17:50:17.093970  285958 system_pods.go:61] "snapshot-controller-56fcc65765-ljxfj" [2d3f31f4-59d7-4584-ac5d-6fe0246e99fa] Running
	I1105 17:50:17.093974  285958 system_pods.go:61] "storage-provisioner" [258ce47e-4fa4-4230-9eef-22ee33056db8] Running
	I1105 17:50:17.093980  285958 system_pods.go:74] duration metric: took 11.138573679s to wait for pod list to return data ...
	I1105 17:50:17.093992  285958 default_sa.go:34] waiting for default service account to be created ...
	I1105 17:50:17.096627  285958 default_sa.go:45] found service account: "default"
	I1105 17:50:17.096655  285958 default_sa.go:55] duration metric: took 2.656727ms for default service account to be created ...
	I1105 17:50:17.096664  285958 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 17:50:17.107083  285958 system_pods.go:86] 18 kube-system pods found
	I1105 17:50:17.107116  285958 system_pods.go:89] "coredns-7c65d6cfc9-fc54b" [2ddf511d-7116-4d85-9a47-69451cf3567b] Running
	I1105 17:50:17.107125  285958 system_pods.go:89] "csi-hostpath-attacher-0" [0a836e59-ab7e-4299-9fa1-58898352e6e1] Running
	I1105 17:50:17.107130  285958 system_pods.go:89] "csi-hostpath-resizer-0" [9e2fd9dc-0b28-4b24-af35-169834609626] Running
	I1105 17:50:17.107134  285958 system_pods.go:89] "csi-hostpathplugin-spl7f" [302e097c-b1e5-4a6e-8974-ed54ac3622a7] Running
	I1105 17:50:17.107140  285958 system_pods.go:89] "etcd-addons-638421" [a4272f93-3c10-41e4-aa9c-d92d18e93912] Running
	I1105 17:50:17.107144  285958 system_pods.go:89] "kindnet-mgcb7" [edefae1d-4f88-4e94-a3f8-881d352214d7] Running
	I1105 17:50:17.107149  285958 system_pods.go:89] "kube-apiserver-addons-638421" [1823851c-e4ac-418b-806e-ec449280ed27] Running
	I1105 17:50:17.107153  285958 system_pods.go:89] "kube-controller-manager-addons-638421" [4cc07926-753f-4483-ac98-15581396a5bb] Running
	I1105 17:50:17.107158  285958 system_pods.go:89] "kube-ingress-dns-minikube" [347ca6ec-8068-4243-80fc-ec6e6a0eeb64] Running
	I1105 17:50:17.107164  285958 system_pods.go:89] "kube-proxy-rjktl" [d984a2cc-7426-4044-8f13-9082c887bda6] Running
	I1105 17:50:17.107168  285958 system_pods.go:89] "kube-scheduler-addons-638421" [1c971a68-3594-46ce-858f-59234800648b] Running
	I1105 17:50:17.107173  285958 system_pods.go:89] "metrics-server-84c5f94fbc-jnqlj" [d43aacca-7261-4530-9a58-1456060cb884] Running
	I1105 17:50:17.107181  285958 system_pods.go:89] "nvidia-device-plugin-daemonset-sms7j" [618e6ceb-8422-465e-9951-05b2b10ce4b0] Running
	I1105 17:50:17.107188  285958 system_pods.go:89] "registry-66c9cd494c-xl46f" [fc8d5d2f-faa3-4f66-b3c1-dac5435a86e5] Running
	I1105 17:50:17.107201  285958 system_pods.go:89] "registry-proxy-2jjl8" [c9892084-3bb9-41d8-b4e5-856524765e94] Running
	I1105 17:50:17.107205  285958 system_pods.go:89] "snapshot-controller-56fcc65765-4tgkv" [e740a30d-66d7-484d-ab45-50d3d0206cfc] Running
	I1105 17:50:17.107210  285958 system_pods.go:89] "snapshot-controller-56fcc65765-ljxfj" [2d3f31f4-59d7-4584-ac5d-6fe0246e99fa] Running
	I1105 17:50:17.107214  285958 system_pods.go:89] "storage-provisioner" [258ce47e-4fa4-4230-9eef-22ee33056db8] Running
	I1105 17:50:17.107224  285958 system_pods.go:126] duration metric: took 10.55376ms to wait for k8s-apps to be running ...
	I1105 17:50:17.107237  285958 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 17:50:17.107297  285958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 17:50:17.118986  285958 system_svc.go:56] duration metric: took 11.739161ms WaitForService to wait for kubelet
	I1105 17:50:17.119017  285958 kubeadm.go:582] duration metric: took 2m41.125574834s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 17:50:17.119038  285958 node_conditions.go:102] verifying NodePressure condition ...
	I1105 17:50:17.122991  285958 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1105 17:50:17.123029  285958 node_conditions.go:123] node cpu capacity is 2
	I1105 17:50:17.123051  285958 node_conditions.go:105] duration metric: took 3.984266ms to run NodePressure ...
	I1105 17:50:17.123065  285958 start.go:241] waiting for startup goroutines ...
	I1105 17:50:17.123072  285958 start.go:246] waiting for cluster config update ...
	I1105 17:50:17.123095  285958 start.go:255] writing updated cluster config ...
	I1105 17:50:17.123411  285958 ssh_runner.go:195] Run: rm -f paused
	I1105 17:50:17.477322  285958 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 17:50:17.481893  285958 out.go:177] * Done! kubectl is now configured to use "addons-638421" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 05 17:52:31 addons-638421 crio[965]: time="2024-11-05 17:52:31.098845924Z" level=info msg="Removed pod sandbox: 7c4931178926fc08cf81d7832c862c3e218c19ed054de933d552ff32e4e9fc9b" id=9024d412-71dc-4576-8a57-ce02563004ba name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 05 17:54:08 addons-638421 crio[965]: time="2024-11-05 17:54:08.306196455Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-vt9rn/POD" id=2cf327f7-08f5-48b3-a1e7-b502548d7630 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 05 17:54:08 addons-638421 crio[965]: time="2024-11-05 17:54:08.306257879Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 05 17:54:08 addons-638421 crio[965]: time="2024-11-05 17:54:08.356826021Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-vt9rn Namespace:default ID:7c43da301aca8d61a222be1c8413901dd0ed7ef7a25055bbd2088beee4970fa5 UID:d63a7083-8468-4dbb-a1aa-2abcbe2b8503 NetNS:/var/run/netns/8d4041a1-b04d-464c-9faa-7c74933c1100 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 05 17:54:08 addons-638421 crio[965]: time="2024-11-05 17:54:08.356876958Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-vt9rn to CNI network \"kindnet\" (type=ptp)"
	Nov 05 17:54:08 addons-638421 crio[965]: time="2024-11-05 17:54:08.375748689Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-vt9rn Namespace:default ID:7c43da301aca8d61a222be1c8413901dd0ed7ef7a25055bbd2088beee4970fa5 UID:d63a7083-8468-4dbb-a1aa-2abcbe2b8503 NetNS:/var/run/netns/8d4041a1-b04d-464c-9faa-7c74933c1100 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 05 17:54:08 addons-638421 crio[965]: time="2024-11-05 17:54:08.375901091Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-vt9rn for CNI network kindnet (type=ptp)"
	Nov 05 17:54:08 addons-638421 crio[965]: time="2024-11-05 17:54:08.378550326Z" level=info msg="Ran pod sandbox 7c43da301aca8d61a222be1c8413901dd0ed7ef7a25055bbd2088beee4970fa5 with infra container: default/hello-world-app-55bf9c44b4-vt9rn/POD" id=2cf327f7-08f5-48b3-a1e7-b502548d7630 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 05 17:54:08 addons-638421 crio[965]: time="2024-11-05 17:54:08.379814924Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e1b7f076-b97c-49dc-a600-6da6aa340939 name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:54:08 addons-638421 crio[965]: time="2024-11-05 17:54:08.380028281Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=e1b7f076-b97c-49dc-a600-6da6aa340939 name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:54:08 addons-638421 crio[965]: time="2024-11-05 17:54:08.380833187Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=ce04a18e-28e1-4806-a062-8bbdce96fe40 name=/runtime.v1.ImageService/PullImage
	Nov 05 17:54:08 addons-638421 crio[965]: time="2024-11-05 17:54:08.384078730Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 05 17:54:08 addons-638421 crio[965]: time="2024-11-05 17:54:08.726469951Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 05 17:54:09 addons-638421 crio[965]: time="2024-11-05 17:54:09.636790510Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=ce04a18e-28e1-4806-a062-8bbdce96fe40 name=/runtime.v1.ImageService/PullImage
	Nov 05 17:54:09 addons-638421 crio[965]: time="2024-11-05 17:54:09.637714521Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=8577620f-4ad8-4601-9b09-e2a7bb427685 name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:54:09 addons-638421 crio[965]: time="2024-11-05 17:54:09.638635767Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8577620f-4ad8-4601-9b09-e2a7bb427685 name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:54:09 addons-638421 crio[965]: time="2024-11-05 17:54:09.639469817Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=849cab47-6bf3-4115-867a-2de98199fbce name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:54:09 addons-638421 crio[965]: time="2024-11-05 17:54:09.640328081Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=849cab47-6bf3-4115-867a-2de98199fbce name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:54:09 addons-638421 crio[965]: time="2024-11-05 17:54:09.644926385Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-vt9rn/hello-world-app" id=5ed9820a-7835-4934-9bb1-546a2fcfbefa name=/runtime.v1.RuntimeService/CreateContainer
	Nov 05 17:54:09 addons-638421 crio[965]: time="2024-11-05 17:54:09.645158680Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 05 17:54:09 addons-638421 crio[965]: time="2024-11-05 17:54:09.673962583Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/89de74513d954bd18024441068afae703225f1590e8feedb28f838c7a08c9a78/merged/etc/passwd: no such file or directory"
	Nov 05 17:54:09 addons-638421 crio[965]: time="2024-11-05 17:54:09.674171797Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/89de74513d954bd18024441068afae703225f1590e8feedb28f838c7a08c9a78/merged/etc/group: no such file or directory"
	Nov 05 17:54:09 addons-638421 crio[965]: time="2024-11-05 17:54:09.745929902Z" level=info msg="Created container bb1c82d838bced5e60bcfaa1608495494f260d86a37c1d21d2ec1773540b19dc: default/hello-world-app-55bf9c44b4-vt9rn/hello-world-app" id=5ed9820a-7835-4934-9bb1-546a2fcfbefa name=/runtime.v1.RuntimeService/CreateContainer
	Nov 05 17:54:09 addons-638421 crio[965]: time="2024-11-05 17:54:09.748823897Z" level=info msg="Starting container: bb1c82d838bced5e60bcfaa1608495494f260d86a37c1d21d2ec1773540b19dc" id=e320f3c1-8b6b-4464-8dc8-72549d89984c name=/runtime.v1.RuntimeService/StartContainer
	Nov 05 17:54:09 addons-638421 crio[965]: time="2024-11-05 17:54:09.759564560Z" level=info msg="Started container" PID=8438 containerID=bb1c82d838bced5e60bcfaa1608495494f260d86a37c1d21d2ec1773540b19dc description=default/hello-world-app-55bf9c44b4-vt9rn/hello-world-app id=e320f3c1-8b6b-4464-8dc8-72549d89984c name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c43da301aca8d61a222be1c8413901dd0ed7ef7a25055bbd2088beee4970fa5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                       ATTEMPT             POD ID              POD
	bb1c82d838bce       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app            0                   7c43da301aca8       hello-world-app-55bf9c44b4-vt9rn
	ae864e6292856       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago            Running             nginx                      0                   2b21644876658       nginx
	38801e9ceca37       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                    0                   680bf55362991       busybox
	ce60b2c281363       registry.k8s.io/ingress-nginx/controller@sha256:787a5408fa511266888b2e765f9666bee67d9bf2518a6b7cfd4ab6cc01c22eee             4 minutes ago            Running             controller                 0                   c0089c40a995c       ingress-nginx-controller-5f85ff4588-sn56p
	cafda1e68ab38       gcr.io/cloud-spanner-emulator/emulator@sha256:7cf2be1ac85c39a0c5b34185b6c3d0ea479269f5c8ecc785713308f93194ca27               4 minutes ago            Running             cloud-spanner-emulator     0                   d2a2cdf851181       cloud-spanner-emulator-dc5db94f4-hdz77
	94175e58ab2c8       nvcr.io/nvidia/k8s-device-plugin@sha256:7089559ce6153018806857f5049085bae15b3bf6f1c8bd19d8b12f707d087dea                     5 minutes ago            Running             nvidia-device-plugin-ctr   0                   1922f08725f9b       nvidia-device-plugin-daemonset-sms7j
	aa80a534c5ee9       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              5 minutes ago            Running             yakd                       0                   296f24038e042       yakd-dashboard-67d98fc6b-xsl7x
	6ae401b0b7512       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   5 minutes ago            Exited              patch                      0                   979a6f1215438       ingress-nginx-admission-patch-kd86x
	d782815988cd2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   5 minutes ago            Exited              create                     0                   808a788fe7ad3       ingress-nginx-admission-create-vtch6
	3f6b6439b98f1       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        5 minutes ago            Running             metrics-server             0                   774723ef7a311       metrics-server-84c5f94fbc-jnqlj
	3b9991fb54cd6       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             5 minutes ago            Running             local-path-provisioner     0                   1e1e44524e367       local-path-provisioner-86d989889c-fw752
	febf0973405a6       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             5 minutes ago            Running             minikube-ingress-dns       0                   61012d7b61a3b       kube-ingress-dns-minikube
	928fd37a67be8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago            Running             storage-provisioner        0                   0f074abc0f7ea       storage-provisioner
	cb903a97940ff       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             5 minutes ago            Running             coredns                    0                   8a5b67615ea90       coredns-7c65d6cfc9-fc54b
	1fd0ca35d5df4       docker.io/kindest/kindnetd@sha256:96156439ac8537499e45fedf68a7cb80f0fbafd77fc4d7a5204d3151cf412450                           6 minutes ago            Running             kindnet-cni                0                   5487a21b0441a       kindnet-mgcb7
	4c604d9201f70       021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba                                                             6 minutes ago            Running             kube-proxy                 0                   8efea88ac21a5       kube-proxy-rjktl
	bab636744f5f7       9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba                                                             6 minutes ago            Running             kube-controller-manager    0                   908191a37142d       kube-controller-manager-addons-638421
	0b5b17e046037       f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270                                                             6 minutes ago            Running             kube-apiserver             0                   948e6fc4819ec       kube-apiserver-addons-638421
	c43ffe8529476       d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a                                                             6 minutes ago            Running             kube-scheduler             0                   338974ac07cf8       kube-scheduler-addons-638421
	5a11a95bd109a       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             6 minutes ago            Running             etcd                       0                   cc34723a8caff       etcd-addons-638421
	
	
	==> coredns [cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f] <==
	[INFO] 10.244.0.10:53704 - 9035 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00375787s
	[INFO] 10.244.0.10:53704 - 43019 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000209977s
	[INFO] 10.244.0.10:53704 - 30603 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000124414s
	[INFO] 10.244.0.10:41881 - 28451 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000118383s
	[INFO] 10.244.0.10:41881 - 28221 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000089756s
	[INFO] 10.244.0.10:57273 - 41304 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00005463s
	[INFO] 10.244.0.10:57273 - 41123 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000075528s
	[INFO] 10.244.0.10:48025 - 8258 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085579s
	[INFO] 10.244.0.10:48025 - 8703 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000135614s
	[INFO] 10.244.0.10:43580 - 16471 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001763296s
	[INFO] 10.244.0.10:43580 - 16291 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002610721s
	[INFO] 10.244.0.10:38533 - 43788 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000076192s
	[INFO] 10.244.0.10:38533 - 44192 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000054982s
	[INFO] 10.244.0.21:42956 - 63222 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000196103s
	[INFO] 10.244.0.21:39277 - 29415 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000162765s
	[INFO] 10.244.0.21:47863 - 52573 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000267601s
	[INFO] 10.244.0.21:43700 - 17211 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00036141s
	[INFO] 10.244.0.21:44841 - 200 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120032s
	[INFO] 10.244.0.21:48618 - 39026 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000066617s
	[INFO] 10.244.0.21:43133 - 26273 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003582469s
	[INFO] 10.244.0.21:34160 - 53974 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003710969s
	[INFO] 10.244.0.21:53435 - 42094 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002472957s
	[INFO] 10.244.0.21:37266 - 31350 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002378639s
	[INFO] 10.244.0.24:33985 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000214646s
	[INFO] 10.244.0.24:33854 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00016379s
	
	
	==> describe nodes <==
	Name:               addons-638421
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-638421
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=addons-638421
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T17_47_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-638421
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 17:47:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-638421
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 17:54:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 17:52:06 +0000   Tue, 05 Nov 2024 17:47:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 17:52:06 +0000   Tue, 05 Nov 2024 17:47:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 17:52:06 +0000   Tue, 05 Nov 2024 17:47:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 17:52:06 +0000   Tue, 05 Nov 2024 17:48:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-638421
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 1aa8bab8a0b94fdea88af9bbdf5cb344
	  System UUID:                7313307f-ed44-4709-8a3d-c1f8b80a1e22
	  Boot ID:                    308934a7-38b0-4c4f-b876-76c17d9b7ecd
	  Kernel Version:             5.15.0-1071-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  default                     cloud-spanner-emulator-dc5db94f4-hdz77       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  default                     hello-world-app-55bf9c44b4-vt9rn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-sn56p    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m28s
	  kube-system                 coredns-7c65d6cfc9-fc54b                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m34s
	  kube-system                 etcd-addons-638421                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m40s
	  kube-system                 kindnet-mgcb7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m34s
	  kube-system                 kube-apiserver-addons-638421                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 kube-controller-manager-addons-638421        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-proxy-rjktl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-scheduler-addons-638421                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 metrics-server-84c5f94fbc-jnqlj              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m29s
	  kube-system                 nvidia-device-plugin-daemonset-sms7j         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  local-path-storage          local-path-provisioner-86d989889c-fw752      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-xsl7x               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m28s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  6m47s (x8 over 6m47s)  kubelet          Node addons-638421 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m47s (x8 over 6m47s)  kubelet          Node addons-638421 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m47s (x7 over 6m47s)  kubelet          Node addons-638421 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m40s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m40s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m40s                  kubelet          Node addons-638421 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m40s                  kubelet          Node addons-638421 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m40s                  kubelet          Node addons-638421 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m35s                  node-controller  Node addons-638421 event: Registered Node addons-638421 in Controller
	  Normal   NodeReady                5m48s                  kubelet          Node addons-638421 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 5 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014171] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.476378] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025481] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.031094] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.017133] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.607383] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.934599] kauditd_printk_skb: 34 callbacks suppressed
	[Nov 5 16:44] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 5 17:18] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0] <==
	{"level":"warn","ts":"2024-11-05T17:47:37.665114Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.509827ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128033044704398431 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-fc54b\" mod_revision:363 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-fc54b\" value_size:3919 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-fc54b\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-11-05T17:47:37.665302Z","caller":"traceutil/trace.go:171","msg":"trace[1292105864] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"195.115986ms","start":"2024-11-05T17:47:37.470174Z","end":"2024-11-05T17:47:37.665290Z","steps":["trace[1292105864] 'process raft request'  (duration: 195.031187ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:37.690663Z","caller":"traceutil/trace.go:171","msg":"trace[1320277466] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"288.969927ms","start":"2024-11-05T17:47:37.399616Z","end":"2024-11-05T17:47:37.688586Z","steps":["trace[1320277466] 'process raft request'  (duration: 57.400873ms)","trace[1320277466] 'compare'  (duration: 199.959296ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T17:47:38.069664Z","caller":"traceutil/trace.go:171","msg":"trace[783356345] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"146.829604ms","start":"2024-11-05T17:47:37.922819Z","end":"2024-11-05T17:47:38.069648Z","steps":["trace[783356345] 'process raft request'  (duration: 113.302622ms)","trace[783356345] 'compare'  (duration: 33.404669ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T17:47:40.070547Z","caller":"traceutil/trace.go:171","msg":"trace[595590458] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"250.454242ms","start":"2024-11-05T17:47:39.820076Z","end":"2024-11-05T17:47:40.070530Z","steps":["trace[595590458] 'process raft request'  (duration: 250.245832ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:47:40.909345Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.233653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:47:40.909491Z","caller":"traceutil/trace.go:171","msg":"trace[407263335] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:0; response_revision:430; }","duration":"131.389903ms","start":"2024-11-05T17:47:40.778084Z","end":"2024-11-05T17:47:40.909474Z","steps":["trace[407263335] 'agreement among raft nodes before linearized reading'  (duration: 131.182609ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:40.909735Z","caller":"traceutil/trace.go:171","msg":"trace[105230021] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"104.30137ms","start":"2024-11-05T17:47:40.805422Z","end":"2024-11-05T17:47:40.909724Z","steps":["trace[105230021] 'process raft request'  (duration: 103.565158ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:40.915996Z","caller":"traceutil/trace.go:171","msg":"trace[1784307454] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"110.467836ms","start":"2024-11-05T17:47:40.805509Z","end":"2024-11-05T17:47:40.915977Z","steps":["trace[1784307454] 'process raft request'  (duration: 103.686823ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:41.039929Z","caller":"traceutil/trace.go:171","msg":"trace[2096270239] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"162.384778ms","start":"2024-11-05T17:47:40.877519Z","end":"2024-11-05T17:47:41.039904Z","steps":["trace[2096270239] 'process raft request'  (duration: 49.968612ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:41.040215Z","caller":"traceutil/trace.go:171","msg":"trace[474042951] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"162.60689ms","start":"2024-11-05T17:47:40.877593Z","end":"2024-11-05T17:47:41.040200Z","steps":["trace[474042951] 'process raft request'  (duration: 58.920174ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:47:41.093296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.703036ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:47:41.093418Z","caller":"traceutil/trace.go:171","msg":"trace[396548148] range","detail":"{range_begin:/registry/clusterrolebindings/minikube-ingress-dns; range_end:; response_count:0; response_revision:437; }","duration":"210.82073ms","start":"2024-11-05T17:47:40.882581Z","end":"2024-11-05T17:47:41.093402Z","steps":["trace[396548148] 'agreement among raft nodes before linearized reading'  (duration: 175.401352ms)","trace[396548148] 'range keys from in-memory index tree'  (duration: 35.248449ms)"],"step_count":2}
	{"level":"warn","ts":"2024-11-05T17:47:41.093647Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.079167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-11-05T17:47:41.056854Z","caller":"traceutil/trace.go:171","msg":"trace[2055039178] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"179.128413ms","start":"2024-11-05T17:47:40.877638Z","end":"2024-11-05T17:47:41.056767Z","steps":["trace[2055039178] 'process raft request'  (duration: 58.923481ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:41.057086Z","caller":"traceutil/trace.go:171","msg":"trace[1685440622] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"179.358517ms","start":"2024-11-05T17:47:40.877713Z","end":"2024-11-05T17:47:41.057071Z","steps":["trace[1685440622] 'process raft request'  (duration: 58.87887ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:41.057157Z","caller":"traceutil/trace.go:171","msg":"trace[1809960471] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"179.367181ms","start":"2024-11-05T17:47:40.877778Z","end":"2024-11-05T17:47:41.057145Z","steps":["trace[1809960471] 'process raft request'  (duration: 59.284407ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:41.057830Z","caller":"traceutil/trace.go:171","msg":"trace[1746615062] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"161.579807ms","start":"2024-11-05T17:47:40.896156Z","end":"2024-11-05T17:47:41.057736Z","steps":["trace[1746615062] 'process raft request'  (duration: 41.202461ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:41.057963Z","caller":"traceutil/trace.go:171","msg":"trace[805537849] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"161.36356ms","start":"2024-11-05T17:47:40.896592Z","end":"2024-11-05T17:47:41.057956Z","steps":["trace[805537849] 'process raft request'  (duration: 40.877964ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:47:41.081367Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.308654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:2586"}
	{"level":"warn","ts":"2024-11-05T17:47:41.092559Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.988609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:47:41.096955Z","caller":"traceutil/trace.go:171","msg":"trace[346104435] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"164.017974ms","start":"2024-11-05T17:47:40.932924Z","end":"2024-11-05T17:47:41.096942Z","steps":["trace[346104435] 'process raft request'  (duration: 109.511916ms)","trace[346104435] 'compare'  (duration: 50.544036ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T17:47:41.104330Z","caller":"traceutil/trace.go:171","msg":"trace[942692642] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:437; }","duration":"221.755331ms","start":"2024-11-05T17:47:40.882558Z","end":"2024-11-05T17:47:41.104314Z","steps":["trace[942692642] 'agreement among raft nodes before linearized reading'  (duration: 175.476487ms)","trace[942692642] 'get authentication metadata'  (duration: 20.79067ms)","trace[942692642] 'range keys from in-memory index tree'  (duration: 14.769352ms)"],"step_count":3}
	{"level":"info","ts":"2024-11-05T17:47:41.114518Z","caller":"traceutil/trace.go:171","msg":"trace[960500303] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:437; }","duration":"232.458522ms","start":"2024-11-05T17:47:40.882041Z","end":"2024-11-05T17:47:41.114499Z","steps":["trace[960500303] 'agreement among raft nodes before linearized reading'  (duration: 55.846438ms)","trace[960500303] 'range keys from in-memory index tree'  (duration: 143.413281ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T17:47:41.115448Z","caller":"traceutil/trace.go:171","msg":"trace[1242843876] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:437; }","duration":"232.803678ms","start":"2024-11-05T17:47:40.882531Z","end":"2024-11-05T17:47:41.115334Z","steps":["trace[1242843876] 'agreement among raft nodes before linearized reading'  (duration: 198.307876ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:54:10 up  1:36,  0 users,  load average: 0.52, 2.11, 2.48
	Linux addons-638421 5.15.0-1071-aws #77~20.04.1-Ubuntu SMP Thu Oct 3 19:34:36 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952] <==
	I1105 17:52:02.259939       1 main.go:301] handling current node
	I1105 17:52:12.254126       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:52:12.254160       1 main.go:301] handling current node
	I1105 17:52:22.258711       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:52:22.258742       1 main.go:301] handling current node
	I1105 17:52:32.259217       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:52:32.259257       1 main.go:301] handling current node
	I1105 17:52:42.252029       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:52:42.252063       1 main.go:301] handling current node
	I1105 17:52:52.257181       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:52:52.257212       1 main.go:301] handling current node
	I1105 17:53:02.260898       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:53:02.260931       1 main.go:301] handling current node
	I1105 17:53:12.259478       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:53:12.259520       1 main.go:301] handling current node
	I1105 17:53:22.252524       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:53:22.252574       1 main.go:301] handling current node
	I1105 17:53:32.251597       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:53:32.251723       1 main.go:301] handling current node
	I1105 17:53:42.252348       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:53:42.252382       1 main.go:301] handling current node
	I1105 17:53:52.258428       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:53:52.258463       1 main.go:301] handling current node
	I1105 17:54:02.251397       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:54:02.251524       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1105 17:49:43.243517       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.80.28:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.80.28:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.80.28:443: connect: connection refused" logger="UnhandledError"
	I1105 17:49:43.336586       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1105 17:50:28.907403       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36862: use of closed network connection
	I1105 17:50:38.175976       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.180.43"}
	I1105 17:51:08.743148       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1105 17:51:30.051636       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:51:30.052066       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1105 17:51:30.067557       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:51:30.087618       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1105 17:51:30.156275       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:51:30.156474       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1105 17:51:30.189375       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:51:30.189491       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1105 17:51:30.210103       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:51:30.210208       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1105 17:51:31.189185       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1105 17:51:31.211384       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1105 17:51:31.309394       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1105 17:51:43.832999       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1105 17:51:44.959202       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1105 17:51:49.377930       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1105 17:51:49.665307       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.222.162"}
	I1105 17:54:08.252848       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.155.234"}
	
	
	==> kube-proxy [4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f] <==
	I1105 17:47:40.297748       1 server_linux.go:66] "Using iptables proxy"
	I1105 17:47:41.548830       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1105 17:47:41.649867       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 17:47:41.804409       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1105 17:47:41.831233       1 server_linux.go:169] "Using iptables Proxier"
	I1105 17:47:41.888740       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 17:47:41.889281       1 server.go:483] "Version info" version="v1.31.2"
	I1105 17:47:41.889351       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 17:47:41.904802       1 config.go:328] "Starting node config controller"
	I1105 17:47:41.915320       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 17:47:41.914911       1 config.go:199] "Starting service config controller"
	I1105 17:47:41.971405       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 17:47:41.914932       1 config.go:105] "Starting endpoint slice config controller"
	I1105 17:47:41.971442       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 17:47:42.104715       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 17:47:42.130040       1 shared_informer.go:320] Caches are synced for node config
	I1105 17:47:42.144723       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c] <==
	W1105 17:47:29.159014       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1105 17:47:29.159024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159095       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1105 17:47:29.159113       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159161       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1105 17:47:29.159179       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1105 17:47:29.159238       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159279       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1105 17:47:29.159293       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159382       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 17:47:29.159396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159440       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1105 17:47:29.159450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159544       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1105 17:47:29.159559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159596       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1105 17:47:29.159607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159732       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1105 17:47:29.159743       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1105 17:47:29.159854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1105 17:47:29.159867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.160131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1105 17:47:29.160151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1105 17:47:30.253645       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 05 17:52:31 addons-638421 kubelet[1497]: E1105 17:52:31.074746    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829151074552645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:52:31 addons-638421 kubelet[1497]: E1105 17:52:31.074781    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829151074552645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:52:41 addons-638421 kubelet[1497]: E1105 17:52:41.077437    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829161077180385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:52:41 addons-638421 kubelet[1497]: E1105 17:52:41.077474    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829161077180385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:52:51 addons-638421 kubelet[1497]: E1105 17:52:51.080494    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829171080226409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:52:51 addons-638421 kubelet[1497]: E1105 17:52:51.080536    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829171080226409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:52:59 addons-638421 kubelet[1497]: I1105 17:52:59.800214    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-sms7j" secret="" err="secret \"gcp-auth\" not found"
	Nov 05 17:53:01 addons-638421 kubelet[1497]: E1105 17:53:01.083595    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829181083362702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:53:01 addons-638421 kubelet[1497]: E1105 17:53:01.083632    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829181083362702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:53:11 addons-638421 kubelet[1497]: E1105 17:53:11.090991    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829191089096551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:53:11 addons-638421 kubelet[1497]: E1105 17:53:11.091031    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829191089096551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:53:15 addons-638421 kubelet[1497]: I1105 17:53:15.799732    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 05 17:53:21 addons-638421 kubelet[1497]: E1105 17:53:21.093943    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829201093727644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:53:21 addons-638421 kubelet[1497]: E1105 17:53:21.093981    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829201093727644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:53:31 addons-638421 kubelet[1497]: E1105 17:53:31.097412    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829211097007303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:53:31 addons-638421 kubelet[1497]: E1105 17:53:31.097453    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829211097007303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:53:35 addons-638421 kubelet[1497]: I1105 17:53:35.800161    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-dc5db94f4-hdz77" secret="" err="secret \"gcp-auth\" not found"
	Nov 05 17:53:41 addons-638421 kubelet[1497]: E1105 17:53:41.099758    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829221099545805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:53:41 addons-638421 kubelet[1497]: E1105 17:53:41.099794    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829221099545805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:53:51 addons-638421 kubelet[1497]: E1105 17:53:51.102827    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829231102582667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:53:51 addons-638421 kubelet[1497]: E1105 17:53:51.102862    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829231102582667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:54:01 addons-638421 kubelet[1497]: E1105 17:54:01.105817    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829241105552998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:54:01 addons-638421 kubelet[1497]: E1105 17:54:01.105858    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829241105552998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576286,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:54:08 addons-638421 kubelet[1497]: I1105 17:54:08.003768    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=138.323837796 podStartE2EDuration="2m19.003750743s" podCreationTimestamp="2024-11-05 17:51:49 +0000 UTC" firstStartedPulling="2024-11-05 17:51:49.933560373 +0000 UTC m=+259.265029666" lastFinishedPulling="2024-11-05 17:51:50.61347332 +0000 UTC m=+259.944942613" observedRunningTime="2024-11-05 17:51:50.92143787 +0000 UTC m=+260.252907180" watchObservedRunningTime="2024-11-05 17:54:08.003750743 +0000 UTC m=+397.335220036"
	Nov 05 17:54:08 addons-638421 kubelet[1497]: I1105 17:54:08.170312    1497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66gtz\" (UniqueName: \"kubernetes.io/projected/d63a7083-8468-4dbb-a1aa-2abcbe2b8503-kube-api-access-66gtz\") pod \"hello-world-app-55bf9c44b4-vt9rn\" (UID: \"d63a7083-8468-4dbb-a1aa-2abcbe2b8503\") " pod="default/hello-world-app-55bf9c44b4-vt9rn"
	
	
	==> storage-provisioner [928fd37a67be862e9b98e4c48f69508228c66790d1dbda30812c2f629b00bf18] <==
	I1105 17:48:23.204145       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1105 17:48:23.233817       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1105 17:48:23.233927       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1105 17:48:23.277526       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1105 17:48:23.277774       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-638421_30695142-7a44-4293-8e9e-e3d697d8213d!
	I1105 17:48:23.280167       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3fd2ffbe-9a3d-4013-9e87-0e75777dbe6e", APIVersion:"v1", ResourceVersion:"912", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-638421_30695142-7a44-4293-8e9e-e3d697d8213d became leader
	I1105 17:48:23.378627       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-638421_30695142-7a44-4293-8e9e-e3d697d8213d!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 17:54:09.617489  296104 logs.go:279] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-11-05T17:54:09Z" level=fatal msg="unable to determine image API version: rpc error: code = Unknown desc = lstat /var/lib/containers/storage/overlay-images/.tmp-images.json344311060: no such file or directory"

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-638421 -n addons-638421
helpers_test.go:261: (dbg) Run:  kubectl --context addons-638421 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-vtch6 ingress-nginx-admission-patch-kd86x
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-638421 describe pod ingress-nginx-admission-create-vtch6 ingress-nginx-admission-patch-kd86x
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-638421 describe pod ingress-nginx-admission-create-vtch6 ingress-nginx-admission-patch-kd86x: exit status 1 (97.414484ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-vtch6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kd86x" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-638421 describe pod ingress-nginx-admission-create-vtch6 ingress-nginx-admission-patch-kd86x: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-638421 addons disable ingress-dns --alsologtostderr -v=1: (1.027501016s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-638421 addons disable ingress --alsologtostderr -v=1: (7.717774651s)
--- FAIL: TestAddons/parallel/Ingress (150.92s)

                                                
                                    
TestAddons/parallel/MetricsServer (319.53s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 9.361096ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-jnqlj" [d43aacca-7261-4530-9a58-1456060cb884] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003534039s
addons_test.go:402: (dbg) Run:  kubectl --context addons-638421 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-638421 top pods -n kube-system: exit status 1 (120.903621ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fc54b, age: 3m25.859753062s

                                                
                                                
** /stderr **
I1105 17:51:01.864993  285188 retry.go:31] will retry after 3.276163746s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-638421 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-638421 top pods -n kube-system: exit status 1 (90.24651ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fc54b, age: 3m29.228688895s

                                                
                                                
** /stderr **
I1105 17:51:05.231740  285188 retry.go:31] will retry after 4.906890396s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-638421 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-638421 top pods -n kube-system: exit status 1 (95.725502ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fc54b, age: 3m34.232405345s

                                                
                                                
** /stderr **
I1105 17:51:10.234950  285188 retry.go:31] will retry after 7.548502008s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-638421 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-638421 top pods -n kube-system: exit status 1 (90.403458ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fc54b, age: 3m41.871341354s

                                                
                                                
** /stderr **
I1105 17:51:17.874155  285188 retry.go:31] will retry after 7.88170783s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-638421 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-638421 top pods -n kube-system: exit status 1 (101.264859ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fc54b, age: 3m49.854481685s

                                                
                                                
** /stderr **
I1105 17:51:25.857497  285188 retry.go:31] will retry after 11.590229097s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-638421 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-638421 top pods -n kube-system: exit status 1 (87.478849ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fc54b, age: 4m1.533008942s

                                                
                                                
** /stderr **
I1105 17:51:37.535955  285188 retry.go:31] will retry after 25.860055888s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-638421 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-638421 top pods -n kube-system: exit status 1 (89.629115ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fc54b, age: 4m27.483344516s

                                                
                                                
** /stderr **
I1105 17:52:03.486147  285188 retry.go:31] will retry after 29.10495193s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-638421 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-638421 top pods -n kube-system: exit status 1 (92.499463ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fc54b, age: 4m56.683962064s

                                                
                                                
** /stderr **
I1105 17:52:32.687428  285188 retry.go:31] will retry after 1m12.565382084s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-638421 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-638421 top pods -n kube-system: exit status 1 (91.669326ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fc54b, age: 6m9.342199686s

                                                
                                                
** /stderr **
I1105 17:53:45.345295  285188 retry.go:31] will retry after 41.398433631s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-638421 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-638421 top pods -n kube-system: exit status 1 (87.894225ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fc54b, age: 6m50.829566041s

                                                
                                                
** /stderr **
I1105 17:54:26.832476  285188 retry.go:31] will retry after 1m5.760728042s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-638421 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-638421 top pods -n kube-system: exit status 1 (88.105064ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fc54b, age: 7m56.67846127s

                                                
                                                
** /stderr **
I1105 17:55:32.681623  285188 retry.go:31] will retry after 39.551001924s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-638421 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-638421 top pods -n kube-system: exit status 1 (82.991691ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fc54b, age: 8m36.313347992s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-638421
helpers_test.go:235: (dbg) docker inspect addons-638421:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bac0cd0c5efa5a79d742a82bac2bd1e6028ef79211194d383e14238cfebc209a",
	        "Created": "2024-11-05T17:47:05.571332234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286449,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-11-05T17:47:05.685071812Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b9c385cbd7184c9dd77d4bc379a996635e559e337cc53655e2d39219017c804c",
	        "ResolvConfPath": "/var/lib/docker/containers/bac0cd0c5efa5a79d742a82bac2bd1e6028ef79211194d383e14238cfebc209a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bac0cd0c5efa5a79d742a82bac2bd1e6028ef79211194d383e14238cfebc209a/hostname",
	        "HostsPath": "/var/lib/docker/containers/bac0cd0c5efa5a79d742a82bac2bd1e6028ef79211194d383e14238cfebc209a/hosts",
	        "LogPath": "/var/lib/docker/containers/bac0cd0c5efa5a79d742a82bac2bd1e6028ef79211194d383e14238cfebc209a/bac0cd0c5efa5a79d742a82bac2bd1e6028ef79211194d383e14238cfebc209a-json.log",
	        "Name": "/addons-638421",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-638421:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-638421",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8a9d29c12ef0d73dc50a3806d04930eac49ee2882249fa752cfae930d1715387-init/diff:/var/lib/docker/overlay2/f1c041cd086a3a2db4f768b1c920339fb85fb20492664e0532c0f72dc744887a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8a9d29c12ef0d73dc50a3806d04930eac49ee2882249fa752cfae930d1715387/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8a9d29c12ef0d73dc50a3806d04930eac49ee2882249fa752cfae930d1715387/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8a9d29c12ef0d73dc50a3806d04930eac49ee2882249fa752cfae930d1715387/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-638421",
	                "Source": "/var/lib/docker/volumes/addons-638421/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-638421",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-638421",
	                "name.minikube.sigs.k8s.io": "addons-638421",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "157bed46083984150cbf1f529a89c97d1d867f744909202dd525796c530d526f",
	            "SandboxKey": "/var/run/docker/netns/157bed460839",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-638421": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ba6034dd16840d908bc849e487ad0dfe7211406fbccbcd6ae357274076dd616b",
	                    "EndpointID": "001ab74b758066e7c297271b89b32f78f9a9a09c0ca31c083ce12b068e0d626f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-638421",
	                        "bac0cd0c5efa"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-638421 -n addons-638421
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-638421 logs -n 25: (1.35189621s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-346323 | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC |                     |
	|         | download-docker-346323                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-346323                                                                   | download-docker-346323 | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-032774   | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC |                     |
	|         | binary-mirror-032774                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34655                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-032774                                                                     | binary-mirror-032774   | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	| addons  | disable dashboard -p                                                                        | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC |                     |
	|         | addons-638421                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC |                     |
	|         | addons-638421                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-638421 --wait=true                                                                | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:50 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-638421 addons disable                                                                | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:50 UTC | 05 Nov 24 17:50 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-638421 addons disable                                                                | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:50 UTC | 05 Nov 24 17:50 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:50 UTC | 05 Nov 24 17:50 UTC |
	|         | -p addons-638421                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-638421 addons disable                                                                | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:50 UTC | 05 Nov 24 17:50 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-638421 ip                                                                            | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:50 UTC | 05 Nov 24 17:50 UTC |
	| addons  | addons-638421 addons disable                                                                | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:50 UTC | 05 Nov 24 17:50 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-638421 addons                                                                        | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:51 UTC | 05 Nov 24 17:51 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-638421 addons                                                                        | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:51 UTC | 05 Nov 24 17:51 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-638421 addons                                                                        | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:51 UTC | 05 Nov 24 17:51 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-638421 ssh curl -s                                                                   | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:51 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-638421 ip                                                                            | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:54 UTC | 05 Nov 24 17:54 UTC |
	| addons  | addons-638421 addons disable                                                                | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:54 UTC | 05 Nov 24 17:54 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-638421 addons disable                                                                | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:54 UTC | 05 Nov 24 17:54 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-638421 addons                                                                        | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:54 UTC | 05 Nov 24 17:54 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-638421 addons disable                                                                | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:54 UTC | 05 Nov 24 17:54 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-638421 ssh cat                                                                       | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:54 UTC | 05 Nov 24 17:54 UTC |
	|         | /opt/local-path-provisioner/pvc-b3573bff-9dda-4c36-88d8-bc4018837214_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-638421 addons disable                                                                | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:54 UTC | 05 Nov 24 17:54 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-638421 addons                                                                        | addons-638421          | jenkins | v1.34.0 | 05 Nov 24 17:54 UTC | 05 Nov 24 17:54 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 17:46:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 17:46:41.761718  285958 out.go:345] Setting OutFile to fd 1 ...
	I1105 17:46:41.761934  285958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:46:41.761962  285958 out.go:358] Setting ErrFile to fd 2...
	I1105 17:46:41.761981  285958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:46:41.762344  285958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-279806/.minikube/bin
	I1105 17:46:41.763442  285958 out.go:352] Setting JSON to false
	I1105 17:46:41.764316  285958 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5345,"bootTime":1730823457,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1105 17:46:41.764416  285958 start.go:139] virtualization:  
	I1105 17:46:41.766507  285958 out.go:177] * [addons-638421] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1105 17:46:41.767693  285958 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 17:46:41.767755  285958 notify.go:220] Checking for updates...
	I1105 17:46:41.770222  285958 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 17:46:41.771600  285958 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-279806/kubeconfig
	I1105 17:46:41.773029  285958 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-279806/.minikube
	I1105 17:46:41.775080  285958 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1105 17:46:41.776127  285958 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 17:46:41.777499  285958 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 17:46:41.796526  285958 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1105 17:46:41.796681  285958 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 17:46:41.854919  285958 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-11-05 17:46:41.845190001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1105 17:46:41.855033  285958 docker.go:318] overlay module found
	I1105 17:46:41.857011  285958 out.go:177] * Using the docker driver based on user configuration
	I1105 17:46:41.858144  285958 start.go:297] selected driver: docker
	I1105 17:46:41.858158  285958 start.go:901] validating driver "docker" against <nil>
	I1105 17:46:41.858171  285958 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 17:46:41.858897  285958 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 17:46:41.913044  285958 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-11-05 17:46:41.903513589 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1105 17:46:41.913246  285958 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 17:46:41.913478  285958 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 17:46:41.914836  285958 out.go:177] * Using Docker driver with root privileges
	I1105 17:46:41.916043  285958 cni.go:84] Creating CNI manager for ""
	I1105 17:46:41.916103  285958 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1105 17:46:41.916115  285958 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1105 17:46:41.916192  285958 start.go:340] cluster config:
	{Name:addons-638421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-638421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 17:46:41.917525  285958 out.go:177] * Starting "addons-638421" primary control-plane node in "addons-638421" cluster
	I1105 17:46:41.918887  285958 cache.go:121] Beginning downloading kic base image for docker with crio
	I1105 17:46:41.920056  285958 out.go:177] * Pulling base image v0.0.45-1730282848-19883 ...
	I1105 17:46:41.921280  285958 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 17:46:41.921327  285958 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	I1105 17:46:41.921339  285958 cache.go:56] Caching tarball of preloaded images
	I1105 17:46:41.921369  285958 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local docker daemon
	I1105 17:46:41.921425  285958 preload.go:172] Found /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1105 17:46:41.921435  285958 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 17:46:41.921766  285958 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/config.json ...
	I1105 17:46:41.921793  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/config.json: {Name:mkc3898952e36435b36cca750d84ae737452ee78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:46:41.936333  285958 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 to local cache
	I1105 17:46:41.936464  285958 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local cache directory
	I1105 17:46:41.936483  285958 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local cache directory, skipping pull
	I1105 17:46:41.936487  285958 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 exists in cache, skipping pull
	I1105 17:46:41.936494  285958 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 as a tarball
	I1105 17:46:41.936500  285958 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 from local cache
	I1105 17:46:58.792525  285958 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 from cached tarball
	I1105 17:46:58.792571  285958 cache.go:194] Successfully downloaded all kic artifacts
	I1105 17:46:58.792635  285958 start.go:360] acquireMachinesLock for addons-638421: {Name:mk11f83312d48db3dadab7544a97d20493370375 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 17:46:58.792750  285958 start.go:364] duration metric: took 92.89µs to acquireMachinesLock for "addons-638421"
	I1105 17:46:58.792781  285958 start.go:93] Provisioning new machine with config: &{Name:addons-638421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-638421 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 17:46:58.792865  285958 start.go:125] createHost starting for "" (driver="docker")
	I1105 17:46:58.794387  285958 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1105 17:46:58.794637  285958 start.go:159] libmachine.API.Create for "addons-638421" (driver="docker")
	I1105 17:46:58.794672  285958 client.go:168] LocalClient.Create starting
	I1105 17:46:58.794792  285958 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem
	I1105 17:46:59.021243  285958 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem
	I1105 17:46:59.369774  285958 cli_runner.go:164] Run: docker network inspect addons-638421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1105 17:46:59.383351  285958 cli_runner.go:211] docker network inspect addons-638421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1105 17:46:59.383447  285958 network_create.go:284] running [docker network inspect addons-638421] to gather additional debugging logs...
	I1105 17:46:59.383468  285958 cli_runner.go:164] Run: docker network inspect addons-638421
	W1105 17:46:59.397087  285958 cli_runner.go:211] docker network inspect addons-638421 returned with exit code 1
	I1105 17:46:59.397115  285958 network_create.go:287] error running [docker network inspect addons-638421]: docker network inspect addons-638421: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-638421 not found
	I1105 17:46:59.397139  285958 network_create.go:289] output of [docker network inspect addons-638421]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-638421 not found
	
	** /stderr **
	I1105 17:46:59.397243  285958 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1105 17:46:59.411963  285958 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b191f0}
	I1105 17:46:59.412010  285958 network_create.go:124] attempt to create docker network addons-638421 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1105 17:46:59.412069  285958 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-638421 addons-638421
	I1105 17:46:59.477179  285958 network_create.go:108] docker network addons-638421 192.168.49.0/24 created
	I1105 17:46:59.477211  285958 kic.go:121] calculated static IP "192.168.49.2" for the "addons-638421" container
	I1105 17:46:59.477286  285958 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1105 17:46:59.490302  285958 cli_runner.go:164] Run: docker volume create addons-638421 --label name.minikube.sigs.k8s.io=addons-638421 --label created_by.minikube.sigs.k8s.io=true
	I1105 17:46:59.507116  285958 oci.go:103] Successfully created a docker volume addons-638421
	I1105 17:46:59.507199  285958 cli_runner.go:164] Run: docker run --rm --name addons-638421-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-638421 --entrypoint /usr/bin/test -v addons-638421:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 -d /var/lib
	I1105 17:47:01.518950  285958 cli_runner.go:217] Completed: docker run --rm --name addons-638421-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-638421 --entrypoint /usr/bin/test -v addons-638421:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 -d /var/lib: (2.011703603s)
	I1105 17:47:01.518980  285958 oci.go:107] Successfully prepared a docker volume addons-638421
	I1105 17:47:01.519012  285958 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 17:47:01.519032  285958 kic.go:194] Starting extracting preloaded images to volume ...
	I1105 17:47:01.519107  285958 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-638421:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 -I lz4 -xf /preloaded.tar -C /extractDir
	I1105 17:47:05.512133  285958 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-638421:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.992986229s)
	I1105 17:47:05.512166  285958 kic.go:203] duration metric: took 3.993130024s to extract preloaded images to volume ...
	W1105 17:47:05.512328  285958 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1105 17:47:05.512449  285958 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1105 17:47:05.556893  285958 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-638421 --name addons-638421 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-638421 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-638421 --network addons-638421 --ip 192.168.49.2 --volume addons-638421:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4
	I1105 17:47:05.868579  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Running}}
	I1105 17:47:05.891442  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:05.919827  285958 cli_runner.go:164] Run: docker exec addons-638421 stat /var/lib/dpkg/alternatives/iptables
	I1105 17:47:05.990441  285958 oci.go:144] the created container "addons-638421" has a running status.
	I1105 17:47:05.990527  285958 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa...
	I1105 17:47:06.224308  285958 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1105 17:47:06.252911  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:06.286105  285958 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1105 17:47:06.286126  285958 kic_runner.go:114] Args: [docker exec --privileged addons-638421 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1105 17:47:06.365080  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:06.386587  285958 machine.go:93] provisionDockerMachine start ...
	I1105 17:47:06.386689  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:06.412968  285958 main.go:141] libmachine: Using SSH client type: native
	I1105 17:47:06.413240  285958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1105 17:47:06.413249  285958 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 17:47:06.413921  285958 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1105 17:47:09.531942  285958 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-638421
	
	I1105 17:47:09.531966  285958 ubuntu.go:169] provisioning hostname "addons-638421"
	I1105 17:47:09.532033  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:09.552762  285958 main.go:141] libmachine: Using SSH client type: native
	I1105 17:47:09.553009  285958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1105 17:47:09.553027  285958 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-638421 && echo "addons-638421" | sudo tee /etc/hostname
	I1105 17:47:09.683636  285958 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-638421
	
	I1105 17:47:09.683718  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:09.699942  285958 main.go:141] libmachine: Using SSH client type: native
	I1105 17:47:09.700190  285958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1105 17:47:09.700213  285958 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-638421' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-638421/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-638421' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 17:47:09.820406  285958 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 17:47:09.820438  285958 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19910-279806/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-279806/.minikube}
	I1105 17:47:09.820468  285958 ubuntu.go:177] setting up certificates
	I1105 17:47:09.820479  285958 provision.go:84] configureAuth start
	I1105 17:47:09.820544  285958 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-638421
	I1105 17:47:09.837546  285958 provision.go:143] copyHostCerts
	I1105 17:47:09.837633  285958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem (1123 bytes)
	I1105 17:47:09.837777  285958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem (1679 bytes)
	I1105 17:47:09.837846  285958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem (1078 bytes)
	I1105 17:47:09.837906  285958 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem org=jenkins.addons-638421 san=[127.0.0.1 192.168.49.2 addons-638421 localhost minikube]
	I1105 17:47:10.586317  285958 provision.go:177] copyRemoteCerts
	I1105 17:47:10.586420  285958 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 17:47:10.586479  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:10.604454  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:10.697807  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 17:47:10.720745  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1105 17:47:10.744323  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 17:47:10.767387  285958 provision.go:87] duration metric: took 946.881723ms to configureAuth
	I1105 17:47:10.767457  285958 ubuntu.go:193] setting minikube options for container-runtime
	I1105 17:47:10.767664  285958 config.go:182] Loaded profile config "addons-638421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 17:47:10.767786  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:10.784208  285958 main.go:141] libmachine: Using SSH client type: native
	I1105 17:47:10.784470  285958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33135 <nil> <nil>}
	I1105 17:47:10.784491  285958 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 17:47:11.000719  285958 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 17:47:11.000743  285958 machine.go:96] duration metric: took 4.61413714s to provisionDockerMachine
	I1105 17:47:11.000755  285958 client.go:171] duration metric: took 12.206077013s to LocalClient.Create
	I1105 17:47:11.000774  285958 start.go:167] duration metric: took 12.206137822s to libmachine.API.Create "addons-638421"
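The sysconfig drop-in written a few lines up is what carries the --insecure-registry option into cri-o; if it ever needs re-checking after provisioning, something like the following would do (a sketch, using the profile name from this run):

	minikube -p addons-638421 ssh -- cat /etc/sysconfig/crio.minikube
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '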
	I1105 17:47:11.000785  285958 start.go:293] postStartSetup for "addons-638421" (driver="docker")
	I1105 17:47:11.000800  285958 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 17:47:11.000878  285958 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 17:47:11.000931  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:11.018295  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:11.110540  285958 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 17:47:11.114113  285958 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1105 17:47:11.114157  285958 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1105 17:47:11.114168  285958 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1105 17:47:11.114180  285958 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1105 17:47:11.114195  285958 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-279806/.minikube/addons for local assets ...
	I1105 17:47:11.114267  285958 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-279806/.minikube/files for local assets ...
	I1105 17:47:11.114298  285958 start.go:296] duration metric: took 113.503361ms for postStartSetup
	I1105 17:47:11.114623  285958 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-638421
	I1105 17:47:11.131435  285958 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/config.json ...
	I1105 17:47:11.131722  285958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 17:47:11.131766  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:11.148205  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:11.233960  285958 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1105 17:47:11.238376  285958 start.go:128] duration metric: took 12.445496117s to createHost
	I1105 17:47:11.238402  285958 start.go:83] releasing machines lock for "addons-638421", held for 12.445636753s
	I1105 17:47:11.238477  285958 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-638421
	I1105 17:47:11.255490  285958 ssh_runner.go:195] Run: cat /version.json
	I1105 17:47:11.255549  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:11.255793  285958 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 17:47:11.255869  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:11.273443  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:11.287089  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:11.359919  285958 ssh_runner.go:195] Run: systemctl --version
	I1105 17:47:11.491873  285958 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 17:47:11.635716  285958 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 17:47:11.640093  285958 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 17:47:11.660024  285958 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1105 17:47:11.660108  285958 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 17:47:11.694552  285958 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
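After the loopback and bridge/podman configs are renamed to *.mk_disabled above, only the CNI config that minikube applies later should remain active; one way to confirm that from outside the node (a sketch, not executed by the test):

	minikube -p addons-638421 ssh -- "ls /etc/cni/net.d/ | grep -v mk_disabled"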
	I1105 17:47:11.694584  285958 start.go:495] detecting cgroup driver to use...
	I1105 17:47:11.694618  285958 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1105 17:47:11.694690  285958 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 17:47:11.713139  285958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 17:47:11.725948  285958 docker.go:217] disabling cri-docker service (if available) ...
	I1105 17:47:11.726014  285958 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 17:47:11.741011  285958 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 17:47:11.756862  285958 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 17:47:11.845196  285958 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 17:47:11.940836  285958 docker.go:233] disabling docker service ...
	I1105 17:47:11.940904  285958 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 17:47:11.960736  285958 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 17:47:11.972166  285958 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 17:47:12.063966  285958 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 17:47:12.158434  285958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 17:47:12.170405  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 17:47:12.186411  285958 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 17:47:12.186489  285958 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:47:12.195952  285958 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 17:47:12.196035  285958 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:47:12.205847  285958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:47:12.215664  285958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:47:12.225088  285958 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 17:47:12.234213  285958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:47:12.243658  285958 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:47:12.259504  285958 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:47:12.269709  285958 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 17:47:12.278550  285958 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 17:47:12.287055  285958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 17:47:12.372681  285958 ssh_runner.go:195] Run: sudo systemctl restart crio
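Taken together, the sed edits above set the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted; a post-hoc check could grep those keys back out (a sketch; the expected values in the comments are inferred from the commands, the file itself is not dumped in the log):

	minikube -p addons-638421 ssh -- \
	  "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",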
	I1105 17:47:12.486734  285958 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 17:47:12.486895  285958 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 17:47:12.490499  285958 start.go:563] Will wait 60s for crictl version
	I1105 17:47:12.490570  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:47:12.494692  285958 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 17:47:12.533081  285958 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1105 17:47:12.533181  285958 ssh_runner.go:195] Run: crio --version
	I1105 17:47:12.571577  285958 ssh_runner.go:195] Run: crio --version
	I1105 17:47:12.609043  285958 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1105 17:47:12.610320  285958 cli_runner.go:164] Run: docker network inspect addons-638421 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1105 17:47:12.625644  285958 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1105 17:47:12.629315  285958 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
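The /etc/hosts rewrite above pins host.minikube.internal to the gateway address 192.168.49.1; resolution inside the node can be confirmed with getent (a sketch, not part of the run):

	minikube -p addons-638421 ssh -- getent hosts host.minikube.internal
	# expected: 192.168.49.1    host.minikube.internal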
	I1105 17:47:12.640038  285958 kubeadm.go:883] updating cluster {Name:addons-638421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-638421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 17:47:12.640170  285958 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 17:47:12.640229  285958 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 17:47:12.718531  285958 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 17:47:12.718557  285958 crio.go:433] Images already preloaded, skipping extraction
	I1105 17:47:12.718611  285958 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 17:47:12.757651  285958 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 17:47:12.757673  285958 cache_images.go:84] Images are preloaded, skipping loading
	I1105 17:47:12.757681  285958 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1105 17:47:12.757772  285958 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-638421 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-638421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
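The kubelet flags above end up in the 10-kubeadm.conf systemd drop-in that is scp'd a few lines below; once the node is up they can be reviewed in place (a sketch):

	minikube -p addons-638421 ssh -- sudo systemctl cat kubelet
	# the drop-in should carry --hostname-override=addons-638421 and --node-ip=192.168.49.2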
	I1105 17:47:12.757859  285958 ssh_runner.go:195] Run: crio config
	I1105 17:47:12.813091  285958 cni.go:84] Creating CNI manager for ""
	I1105 17:47:12.813112  285958 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1105 17:47:12.813122  285958 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 17:47:12.813145  285958 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-638421 NodeName:addons-638421 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 17:47:12.813278  285958 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-638421"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 17:47:12.813350  285958 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 17:47:12.821923  285958 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 17:47:12.821998  285958 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 17:47:12.830421  285958 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1105 17:47:12.848363  285958 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 17:47:12.866525  285958 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
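At this point the rendered kubeadm config is on the node as /var/tmp/minikube/kubeadm.yaml.new (2287 bytes). The test does not validate it separately, but a dry run against the same kubeadm binary would be one way to sanity-check it by hand (a sketch, not a step from the log):

	minikube -p addons-638421 ssh -- \
	  "sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run"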
	I1105 17:47:12.883613  285958 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1105 17:47:12.887037  285958 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 17:47:12.897614  285958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 17:47:12.984474  285958 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 17:47:12.997540  285958 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421 for IP: 192.168.49.2
	I1105 17:47:12.997562  285958 certs.go:194] generating shared ca certs ...
	I1105 17:47:12.997579  285958 certs.go:226] acquiring lock for ca certs: {Name:mk7e394808202081d7250bf8ad59a3f119279ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:12.997700  285958 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key
	I1105 17:47:13.727210  285958 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt ...
	I1105 17:47:13.727284  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt: {Name:mkf1106f42f4bd8b4e9cc0c09cf43e224d6e4d77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:13.727499  285958 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key ...
	I1105 17:47:13.727538  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key: {Name:mk70791accfe1ce1ee535bb8717477a0b263e077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:13.728161  285958 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key
	I1105 17:47:14.043503  285958 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.crt ...
	I1105 17:47:14.043543  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.crt: {Name:mkb9e298515dcba1584664fd6752a7c87593fd93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:14.043752  285958 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key ...
	I1105 17:47:14.043766  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key: {Name:mkfabb63e26b0da996b5cde4c5ac31decabeaf9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:14.043848  285958 certs.go:256] generating profile certs ...
	I1105 17:47:14.043948  285958 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.key
	I1105 17:47:14.043967  285958 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt with IP's: []
	I1105 17:47:14.233310  285958 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt ...
	I1105 17:47:14.233350  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: {Name:mk1d5b6c538ba9338a12a3484f12513b45bd70ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:14.233539  285958 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.key ...
	I1105 17:47:14.233553  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.key: {Name:mk17f6f5a6828ae04d86564391c29b09b2849add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:14.234123  285958 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.key.6136f6be
	I1105 17:47:14.234154  285958 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.crt.6136f6be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1105 17:47:15.042502  285958 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.crt.6136f6be ...
	I1105 17:47:15.042539  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.crt.6136f6be: {Name:mkab983ff08f02b24e234d0f10aaba5016e18b1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:15.042742  285958 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.key.6136f6be ...
	I1105 17:47:15.042759  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.key.6136f6be: {Name:mk63d3a13d47642ed23e104d9b25369657e35819 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:15.042853  285958 certs.go:381] copying /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.crt.6136f6be -> /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.crt
	I1105 17:47:15.042943  285958 certs.go:385] copying /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.key.6136f6be -> /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.key
	I1105 17:47:15.043026  285958 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/proxy-client.key
	I1105 17:47:15.043051  285958 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/proxy-client.crt with IP's: []
	I1105 17:47:15.788296  285958 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/proxy-client.crt ...
	I1105 17:47:15.788330  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/proxy-client.crt: {Name:mkda310dc6b34ccb2fe27b446ae3b24645ee5362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:15.788519  285958 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/proxy-client.key ...
	I1105 17:47:15.788533  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/proxy-client.key: {Name:mkf7e666d49ad0feba5515de915e6a1270ef2c39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:15.788759  285958 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 17:47:15.788802  285958 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem (1078 bytes)
	I1105 17:47:15.788833  285958 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem (1123 bytes)
	I1105 17:47:15.788865  285958 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem (1679 bytes)
	I1105 17:47:15.789466  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 17:47:15.815522  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 17:47:15.840674  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 17:47:15.866178  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 17:47:15.890225  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1105 17:47:15.913946  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 17:47:15.937020  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 17:47:15.965051  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 17:47:16.000755  285958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 17:47:16.033733  285958 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 17:47:16.052384  285958 ssh_runner.go:195] Run: openssl version
	I1105 17:47:16.058042  285958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 17:47:16.067905  285958 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 17:47:16.071591  285958 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:47 /usr/share/ca-certificates/minikubeCA.pem
	I1105 17:47:16.071689  285958 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 17:47:16.078824  285958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
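The b5213941.0 symlink name used above is simply the openssl subject hash of minikubeCA.pem, so the pair of commands below reproduces the check by hand (a sketch):

	minikube -p addons-638421 ssh -- \
	  "openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem"   # prints b5213941
	minikube -p addons-638421 ssh -- ls -l /etc/ssl/certs/b5213941.0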
	I1105 17:47:16.088525  285958 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 17:47:16.092018  285958 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 17:47:16.092097  285958 kubeadm.go:392] StartCluster: {Name:addons-638421 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-638421 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 17:47:16.092201  285958 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 17:47:16.092267  285958 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 17:47:16.130283  285958 cri.go:89] found id: ""
	I1105 17:47:16.130399  285958 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 17:47:16.139239  285958 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 17:47:16.148284  285958 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1105 17:47:16.148378  285958 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 17:47:16.157060  285958 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 17:47:16.157082  285958 kubeadm.go:157] found existing configuration files:
	
	I1105 17:47:16.157157  285958 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 17:47:16.166021  285958 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 17:47:16.166089  285958 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 17:47:16.174553  285958 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 17:47:16.183626  285958 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 17:47:16.183718  285958 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 17:47:16.192093  285958 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 17:47:16.201231  285958 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 17:47:16.201318  285958 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 17:47:16.209513  285958 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 17:47:16.218356  285958 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 17:47:16.218420  285958 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
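The four grep-and-remove steps above all follow the same pattern: keep a kubeconfig only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm can regenerate it. As a loop it is effectively (a sketch of the equivalent shell, not a command from the log):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done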
	I1105 17:47:16.226808  285958 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1105 17:47:16.267014  285958 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 17:47:16.267185  285958 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 17:47:16.287335  285958 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1105 17:47:16.287410  285958 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-aws
	I1105 17:47:16.287450  285958 kubeadm.go:310] OS: Linux
	I1105 17:47:16.287501  285958 kubeadm.go:310] CGROUPS_CPU: enabled
	I1105 17:47:16.287554  285958 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1105 17:47:16.287603  285958 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1105 17:47:16.287655  285958 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1105 17:47:16.287706  285958 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1105 17:47:16.287761  285958 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1105 17:47:16.287809  285958 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1105 17:47:16.287861  285958 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1105 17:47:16.287910  285958 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1105 17:47:16.344112  285958 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 17:47:16.344300  285958 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 17:47:16.344452  285958 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 17:47:16.352910  285958 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 17:47:16.355627  285958 out.go:235]   - Generating certificates and keys ...
	I1105 17:47:16.355805  285958 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 17:47:16.355910  285958 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 17:47:16.899212  285958 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1105 17:47:17.316557  285958 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1105 17:47:18.092012  285958 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1105 17:47:18.343114  285958 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1105 17:47:18.892066  285958 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1105 17:47:18.892399  285958 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-638421 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1105 17:47:19.304773  285958 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1105 17:47:19.305110  285958 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-638421 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1105 17:47:19.681655  285958 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1105 17:47:20.165817  285958 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1105 17:47:20.341606  285958 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1105 17:47:20.341939  285958 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 17:47:21.057527  285958 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 17:47:21.648347  285958 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 17:47:22.357901  285958 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 17:47:22.621691  285958 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 17:47:22.926682  285958 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 17:47:22.927502  285958 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 17:47:22.932509  285958 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 17:47:22.934214  285958 out.go:235]   - Booting up control plane ...
	I1105 17:47:22.934311  285958 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 17:47:22.934388  285958 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 17:47:22.935474  285958 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 17:47:22.944765  285958 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 17:47:22.951090  285958 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 17:47:22.951145  285958 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 17:47:23.044002  285958 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 17:47:23.044138  285958 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 17:47:24.045502  285958 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001636577s
	I1105 17:47:24.045593  285958 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 17:47:30.047912  285958 kubeadm.go:310] [api-check] The API server is healthy after 6.002383879s
	I1105 17:47:30.069330  285958 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 17:47:30.085527  285958 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 17:47:30.115355  285958 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 17:47:30.115560  285958 kubeadm.go:310] [mark-control-plane] Marking the node addons-638421 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 17:47:30.127547  285958 kubeadm.go:310] [bootstrap-token] Using token: rsv0a1.q27lp5o52vrw8wgr
	I1105 17:47:30.130380  285958 out.go:235]   - Configuring RBAC rules ...
	I1105 17:47:30.130535  285958 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 17:47:30.134622  285958 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 17:47:30.143923  285958 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 17:47:30.150003  285958 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 17:47:30.154238  285958 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 17:47:30.158158  285958 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 17:47:30.454634  285958 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 17:47:30.931775  285958 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 17:47:31.454571  285958 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 17:47:31.455604  285958 kubeadm.go:310] 
	I1105 17:47:31.455683  285958 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 17:47:31.455690  285958 kubeadm.go:310] 
	I1105 17:47:31.455767  285958 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 17:47:31.455771  285958 kubeadm.go:310] 
	I1105 17:47:31.455797  285958 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 17:47:31.455862  285958 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 17:47:31.455914  285958 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 17:47:31.455919  285958 kubeadm.go:310] 
	I1105 17:47:31.455972  285958 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 17:47:31.455977  285958 kubeadm.go:310] 
	I1105 17:47:31.456024  285958 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 17:47:31.456032  285958 kubeadm.go:310] 
	I1105 17:47:31.456083  285958 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 17:47:31.456158  285958 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 17:47:31.456227  285958 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 17:47:31.456231  285958 kubeadm.go:310] 
	I1105 17:47:31.456314  285958 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 17:47:31.456391  285958 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 17:47:31.456396  285958 kubeadm.go:310] 
	I1105 17:47:31.456479  285958 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rsv0a1.q27lp5o52vrw8wgr \
	I1105 17:47:31.456583  285958 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e7145c6c1814668d016f7eaa1b0396fc58dc6956712e65f29fc86a3e27d67eb \
	I1105 17:47:31.456622  285958 kubeadm.go:310] 	--control-plane 
	I1105 17:47:31.456628  285958 kubeadm.go:310] 
	I1105 17:47:31.456713  285958 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 17:47:31.456717  285958 kubeadm.go:310] 
	I1105 17:47:31.456803  285958 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rsv0a1.q27lp5o52vrw8wgr \
	I1105 17:47:31.456906  285958 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e7145c6c1814668d016f7eaa1b0396fc58dc6956712e65f29fc86a3e27d67eb 
	I1105 17:47:31.461045  285958 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-aws\n", err: exit status 1
	I1105 17:47:31.461159  285958 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
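Both kubeadm warnings above appear expected here: the kernel "configs" module is absent in the kicbase image (and SystemVerification is deliberately ignored earlier), and minikube starts the kubelet itself rather than enabling the unit. If one wanted to silence the second warning anyway, enabling the service is all it takes (a sketch):

	minikube -p addons-638421 ssh -- sudo systemctl enable kubelet.service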
	I1105 17:47:31.461175  285958 cni.go:84] Creating CNI manager for ""
	I1105 17:47:31.461184  285958 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1105 17:47:31.464098  285958 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1105 17:47:31.466997  285958 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1105 17:47:31.470722  285958 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1105 17:47:31.470744  285958 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1105 17:47:31.488265  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
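With the kindnet manifest applied above, the CNI pods should appear in kube-system shortly; a manual spot check would be (a sketch, filtering by name since the exact labels are not shown in the log):

	kubectl --context addons-638421 -n kube-system get pods -o wide | grep kindnet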
	I1105 17:47:31.763631  285958 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 17:47:31.763768  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:31.763852  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-638421 minikube.k8s.io/updated_at=2024_11_05T17_47_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=addons-638421 minikube.k8s.io/primary=true
	I1105 17:47:31.771708  285958 ops.go:34] apiserver oom_adj: -16
	I1105 17:47:31.897543  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:32.398359  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:32.898411  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:33.398331  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:33.898202  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:34.398282  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:34.897726  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:35.397920  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:35.898506  285958 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:47:35.989022  285958 kubeadm.go:1113] duration metric: took 4.225299232s to wait for elevateKubeSystemPrivileges
	I1105 17:47:35.989051  285958 kubeadm.go:394] duration metric: took 19.896984389s to StartCluster
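StartCluster finishes right after the minikube-rbac ClusterRoleBinding created above, which grants cluster-admin to the kube-system default service account; it can be inspected afterwards with (a sketch):

	kubectl --context addons-638421 get clusterrolebinding minikube-rbac -o wide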
	I1105 17:47:35.989068  285958 settings.go:142] acquiring lock: {Name:mk4446dbaea3bd85b9adc705341ee771323ec865 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:35.989199  285958 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-279806/kubeconfig
	I1105 17:47:35.990064  285958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/kubeconfig: {Name:mk94e1e77f14516629f7a9763439bf1ac2a3fdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:47:35.993401  285958 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 17:47:35.993835  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1105 17:47:35.994240  285958 config.go:182] Loaded profile config "addons-638421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 17:47:35.994293  285958 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1105 17:47:35.994380  285958 addons.go:69] Setting yakd=true in profile "addons-638421"
	I1105 17:47:35.994408  285958 addons.go:234] Setting addon yakd=true in "addons-638421"
	I1105 17:47:35.994436  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:35.994926  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:35.995201  285958 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-638421"
	I1105 17:47:35.995219  285958 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-638421"
	I1105 17:47:35.995245  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:35.995629  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:35.996250  285958 addons.go:69] Setting cloud-spanner=true in profile "addons-638421"
	I1105 17:47:35.996274  285958 addons.go:234] Setting addon cloud-spanner=true in "addons-638421"
	I1105 17:47:35.996299  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:35.996731  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.003798  285958 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-638421"
	I1105 17:47:36.003868  285958 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-638421"
	I1105 17:47:36.003902  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.004404  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.004774  285958 out.go:177] * Verifying Kubernetes components...
	I1105 17:47:36.007210  285958 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-638421"
	I1105 17:47:36.007246  285958 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-638421"
	I1105 17:47:36.007286  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.012923  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.013129  285958 addons.go:69] Setting registry=true in profile "addons-638421"
	I1105 17:47:36.013178  285958 addons.go:234] Setting addon registry=true in "addons-638421"
	I1105 17:47:36.013229  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.013716  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.021146  285958 addons.go:69] Setting default-storageclass=true in profile "addons-638421"
	I1105 17:47:36.021175  285958 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-638421"
	I1105 17:47:36.021499  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.029011  285958 addons.go:69] Setting storage-provisioner=true in profile "addons-638421"
	I1105 17:47:36.029053  285958 addons.go:234] Setting addon storage-provisioner=true in "addons-638421"
	I1105 17:47:36.029088  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.029552  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.044836  285958 addons.go:69] Setting gcp-auth=true in profile "addons-638421"
	I1105 17:47:36.044869  285958 mustload.go:65] Loading cluster: addons-638421
	I1105 17:47:36.045063  285958 config.go:182] Loaded profile config "addons-638421": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 17:47:36.045305  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.063111  285958 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-638421"
	I1105 17:47:36.063194  285958 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-638421"
	I1105 17:47:36.063574  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.063856  285958 addons.go:69] Setting ingress=true in profile "addons-638421"
	I1105 17:47:36.063874  285958 addons.go:234] Setting addon ingress=true in "addons-638421"
	I1105 17:47:36.063910  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.064284  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.092395  285958 addons.go:69] Setting ingress-dns=true in profile "addons-638421"
	I1105 17:47:36.092424  285958 addons.go:234] Setting addon ingress-dns=true in "addons-638421"
	I1105 17:47:36.092473  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.092950  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.105420  285958 addons.go:69] Setting volcano=true in profile "addons-638421"
	I1105 17:47:36.105463  285958 addons.go:234] Setting addon volcano=true in "addons-638421"
	I1105 17:47:36.105500  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.105962  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.110397  285958 addons.go:69] Setting inspektor-gadget=true in profile "addons-638421"
	I1105 17:47:36.110424  285958 addons.go:234] Setting addon inspektor-gadget=true in "addons-638421"
	I1105 17:47:36.110461  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.110912  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.127909  285958 addons.go:69] Setting metrics-server=true in profile "addons-638421"
	I1105 17:47:36.127938  285958 addons.go:234] Setting addon metrics-server=true in "addons-638421"
	I1105 17:47:36.127975  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.128429  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.138630  285958 addons.go:69] Setting volumesnapshots=true in profile "addons-638421"
	I1105 17:47:36.138664  285958 addons.go:234] Setting addon volumesnapshots=true in "addons-638421"
	I1105 17:47:36.138723  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.139193  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.156122  285958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 17:47:36.163572  285958 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1105 17:47:36.166490  285958 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1105 17:47:36.166513  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1105 17:47:36.166624  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.253238  285958 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1105 17:47:36.253579  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1105 17:47:36.255802  285958 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1105 17:47:36.255983  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1105 17:47:36.256156  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.270952  285958 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1105 17:47:36.271119  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1105 17:47:36.275006  285958 out.go:177]   - Using image docker.io/registry:2.8.3
	I1105 17:47:36.278006  285958 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1105 17:47:36.278648  285958 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1105 17:47:36.278665  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1105 17:47:36.278730  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.281564  285958 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-638421"
	I1105 17:47:36.281607  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.282015  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	W1105 17:47:36.284923  285958 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1105 17:47:36.285029  285958 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1105 17:47:36.285101  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.288804  285958 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 17:47:36.289653  285958 addons.go:234] Setting addon default-storageclass=true in "addons-638421"
	I1105 17:47:36.289682  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:36.290087  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:36.290233  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1105 17:47:36.290440  285958 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1105 17:47:36.305885  285958 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1105 17:47:36.313355  285958 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1105 17:47:36.313378  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1105 17:47:36.313438  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.313585  285958 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1105 17:47:36.313749  285958 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1105 17:47:36.313759  285958 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1105 17:47:36.313800  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.318224  285958 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1105 17:47:36.318469  285958 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 17:47:36.318484  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 17:47:36.318536  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.322284  285958 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1105 17:47:36.322305  285958 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1105 17:47:36.322378  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.331771  285958 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1105 17:47:36.331794  285958 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1105 17:47:36.331855  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.346251  285958 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 17:47:36.346276  285958 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 17:47:36.346342  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.350374  285958 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1105 17:47:36.350403  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1105 17:47:36.350465  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.365104  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1105 17:47:36.368741  285958 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1105 17:47:36.370854  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1105 17:47:36.373152  285958 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1105 17:47:36.376920  285958 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1105 17:47:36.376945  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1105 17:47:36.377012  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.377199  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1105 17:47:36.387932  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1105 17:47:36.392425  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1105 17:47:36.421407  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1105 17:47:36.425022  285958 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1105 17:47:36.427264  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.428667  285958 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1105 17:47:36.428691  285958 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1105 17:47:36.428753  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.429250  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.481663  285958 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 17:47:36.481741  285958 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 17:47:36.481837  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.507850  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.523526  285958 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1105 17:47:36.528177  285958 out.go:177]   - Using image docker.io/busybox:stable
	I1105 17:47:36.534271  285958 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1105 17:47:36.534300  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1105 17:47:36.534372  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:36.536693  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.567122  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.575846  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.584845  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.585612  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.585732  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.595438  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.600689  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	W1105 17:47:36.602331  285958 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1105 17:47:36.602356  285958 retry.go:31] will retry after 308.370699ms: ssh: handshake failed: EOF
	I1105 17:47:36.608800  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.627581  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	W1105 17:47:36.629425  285958 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1105 17:47:36.629451  285958 retry.go:31] will retry after 317.71354ms: ssh: handshake failed: EOF
	I1105 17:47:36.648779  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:36.816392  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1105 17:47:36.861685  285958 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 17:47:36.883858  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1105 17:47:36.890675  285958 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1105 17:47:36.890701  285958 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1105 17:47:36.968860  285958 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1105 17:47:36.968891  285958 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1105 17:47:36.975902  285958 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1105 17:47:36.975925  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1105 17:47:37.005390  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1105 17:47:37.009797  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 17:47:37.047570  285958 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1105 17:47:37.047646  285958 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1105 17:47:37.057609  285958 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 17:47:37.057635  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1105 17:47:37.090752  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1105 17:47:37.114852  285958 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1105 17:47:37.114878  285958 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1105 17:47:37.127837  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1105 17:47:37.133138  285958 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1105 17:47:37.133161  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1105 17:47:37.137825  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1105 17:47:37.156172  285958 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1105 17:47:37.156195  285958 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1105 17:47:37.180478  285958 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1105 17:47:37.180509  285958 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1105 17:47:37.194979  285958 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 17:47:37.195009  285958 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 17:47:37.279721  285958 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1105 17:47:37.279750  285958 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1105 17:47:37.318028  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1105 17:47:37.321284  285958 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1105 17:47:37.321309  285958 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1105 17:47:37.358544  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 17:47:37.370080  285958 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1105 17:47:37.370107  285958 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1105 17:47:37.405079  285958 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 17:47:37.405107  285958 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 17:47:37.410080  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1105 17:47:37.458380  285958 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1105 17:47:37.458423  285958 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1105 17:47:37.475543  285958 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1105 17:47:37.475575  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1105 17:47:37.533876  285958 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1105 17:47:37.533915  285958 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1105 17:47:37.568722  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 17:47:37.581760  285958 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1105 17:47:37.581804  285958 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1105 17:47:37.674118  285958 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1105 17:47:37.674144  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1105 17:47:37.702915  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1105 17:47:37.712158  285958 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1105 17:47:37.712197  285958 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1105 17:47:37.774962  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1105 17:47:37.815972  285958 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1105 17:47:37.815996  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1105 17:47:37.949834  285958 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1105 17:47:37.949873  285958 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1105 17:47:38.009830  285958 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1105 17:47:38.009856  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1105 17:47:38.123769  285958 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1105 17:47:38.123801  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1105 17:47:38.212324  285958 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1105 17:47:38.212351  285958 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1105 17:47:38.287388  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1105 17:47:38.525738  285958 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.272134647s)
	I1105 17:47:38.525776  285958 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1105 17:47:40.112730  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.296284303s)
	I1105 17:47:40.112790  285958 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.251082635s)
	I1105 17:47:40.113712  285958 node_ready.go:35] waiting up to 6m0s for node "addons-638421" to be "Ready" ...
	I1105 17:47:40.210419  285958 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-638421" context rescaled to 1 replicas
	I1105 17:47:40.858898  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.975003586s)
	I1105 17:47:42.172274  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:47:43.000625  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.995180215s)
	I1105 17:47:43.000813  285958 addons.go:475] Verifying addon ingress=true in "addons-638421"
	I1105 17:47:43.000837  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.872971503s)
	I1105 17:47:43.000929  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.863076106s)
	I1105 17:47:43.000953  285958 addons.go:475] Verifying addon registry=true in "addons-638421"
	I1105 17:47:43.000734  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.99091197s)
	I1105 17:47:43.000787  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.909969776s)
	I1105 17:47:43.001426  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.683362299s)
	I1105 17:47:43.001475  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.642908318s)
	I1105 17:47:43.001525  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.591423852s)
	I1105 17:47:43.001676  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.432923369s)
	I1105 17:47:43.001690  285958 addons.go:475] Verifying addon metrics-server=true in "addons-638421"
	I1105 17:47:43.001731  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.298788469s)
	I1105 17:47:43.003682  285958 out.go:177] * Verifying ingress addon...
	I1105 17:47:43.003778  285958 out.go:177] * Verifying registry addon...
	I1105 17:47:43.003834  285958 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-638421 service yakd-dashboard -n yakd-dashboard
	
	I1105 17:47:43.006467  285958 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1105 17:47:43.008189  285958 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1105 17:47:43.035783  285958 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1105 17:47:43.035812  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1105 17:47:43.055345  285958 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1105 17:47:43.056432  285958 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1105 17:47:43.056452  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:43.116803  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.341796752s)
	W1105 17:47:43.116844  285958 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1105 17:47:43.116887  285958 retry.go:31] will retry after 255.002227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1105 17:47:43.315704  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.028263294s)
	I1105 17:47:43.315794  285958 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-638421"
	I1105 17:47:43.320304  285958 out.go:177] * Verifying csi-hostpath-driver addon...
	I1105 17:47:43.323876  285958 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1105 17:47:43.334295  285958 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1105 17:47:43.334365  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:43.372993  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1105 17:47:43.520254  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:43.521595  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:43.827884  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:44.011610  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:44.012295  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:44.327871  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:44.511171  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:44.513108  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:44.617005  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:47:44.827670  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:45.011750  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:45.013127  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:45.328480  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:45.510813  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:45.512540  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:45.832333  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:46.012586  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:46.014902  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:46.049040  285958 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.675987895s)
	I1105 17:47:46.328243  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:46.511888  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:46.512947  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:46.617359  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:47:46.827348  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:47.010583  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:47.012548  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:47.030183  285958 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1105 17:47:47.030269  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:47.047719  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:47.146233  285958 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1105 17:47:47.164339  285958 addons.go:234] Setting addon gcp-auth=true in "addons-638421"
	I1105 17:47:47.164403  285958 host.go:66] Checking if "addons-638421" exists ...
	I1105 17:47:47.164899  285958 cli_runner.go:164] Run: docker container inspect addons-638421 --format={{.State.Status}}
	I1105 17:47:47.187618  285958 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1105 17:47:47.187676  285958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-638421
	I1105 17:47:47.211580  285958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33135 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/addons-638421/id_rsa Username:docker}
	I1105 17:47:47.316058  285958 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1105 17:47:47.324266  285958 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1105 17:47:47.327421  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:47.331960  285958 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1105 17:47:47.331987  285958 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1105 17:47:47.350260  285958 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1105 17:47:47.350283  285958 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1105 17:47:47.368208  285958 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1105 17:47:47.368229  285958 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1105 17:47:47.386115  285958 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1105 17:47:47.512570  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:47.513265  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:47.830470  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:47.928919  285958 addons.go:475] Verifying addon gcp-auth=true in "addons-638421"
	I1105 17:47:47.933449  285958 out.go:177] * Verifying gcp-auth addon...
	I1105 17:47:47.937071  285958 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1105 17:47:47.942134  285958 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1105 17:47:47.942192  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:48.042967  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:48.043819  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:48.327719  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:48.440810  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:48.511006  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:48.511638  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:48.827772  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:48.940336  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:49.010709  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:49.011926  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:49.116709  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:47:49.328079  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:49.440342  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:49.510786  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:49.511999  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:49.827541  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:49.940129  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:50.012396  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:50.012641  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:50.328255  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:50.441154  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:50.511105  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:50.511657  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:50.828086  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:50.940600  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:51.012260  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:51.013072  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:51.117573  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:47:51.327645  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:51.440922  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:51.510825  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:51.511543  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:51.829139  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:51.940734  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:52.011742  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:52.012924  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:52.327912  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:52.440588  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:52.510998  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:52.512219  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:52.827258  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:52.940834  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:53.010790  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:53.011838  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:53.327341  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:53.440646  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:53.511491  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:53.511490  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:53.617584  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:47:53.828195  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:53.940928  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:54.011669  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:54.013357  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:54.327009  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:54.440676  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:54.510334  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:54.511719  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:54.827926  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:54.939937  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:55.010848  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:55.012337  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:55.327608  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:55.440204  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:55.511666  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:55.512968  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:55.828004  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:55.940260  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:56.011260  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:56.011685  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:56.117542  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:47:56.327635  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:56.440325  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:56.510728  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:56.513772  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:56.827882  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:56.940947  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:57.011394  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:57.011648  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:57.327991  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:57.440530  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:57.510492  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:57.512121  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:57.828375  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:57.941729  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:58.012254  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:58.013503  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:58.117642  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:47:58.328560  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:58.441114  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:58.511342  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:58.513019  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:58.827301  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:58.941013  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:59.011204  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:59.011554  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:59.327392  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:59.440632  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:47:59.510797  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:47:59.511707  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:47:59.827761  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:47:59.940959  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:00.042819  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:00.044680  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:00.118404  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:00.327868  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:00.440681  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:00.510721  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:00.512303  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:00.827450  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:00.940713  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:01.011078  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:01.012280  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:01.328043  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:01.440510  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:01.513038  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:01.518656  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:01.828284  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:01.940681  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:02.011217  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:02.012391  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:02.328216  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:02.440649  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:02.512098  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:02.512349  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:02.617551  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:02.828777  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:02.941147  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:03.010404  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:03.012802  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:03.327820  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:03.440325  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:03.511311  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:03.512011  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:03.829624  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:03.940803  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:04.011207  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:04.012258  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:04.327904  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:04.442150  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:04.510872  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:04.512687  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:04.826914  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:04.940529  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:05.011332  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:05.012071  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:05.117340  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:05.330252  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:05.440541  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:05.512036  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:05.514146  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:05.827774  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:05.940558  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:06.010539  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:06.013067  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:06.328086  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:06.440707  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:06.510264  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:06.512822  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:06.827376  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:06.941418  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:07.011058  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:07.012307  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:07.117828  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:07.327059  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:07.440051  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:07.510355  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:07.511671  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:07.827838  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:07.941181  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:08.010965  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:08.012529  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:08.326975  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:08.441048  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:08.510450  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:08.512005  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:08.827555  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:08.941184  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:09.010864  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:09.012156  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:09.328025  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:09.440673  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:09.510272  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:09.511692  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:09.617442  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:09.827390  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:09.940801  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:10.011042  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:10.012596  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:10.328301  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:10.440485  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:10.510132  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:10.512648  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:10.827512  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:10.940274  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:11.010716  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:11.011217  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:11.327907  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:11.440874  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:11.513465  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:11.514475  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:11.827818  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:11.940397  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:12.010836  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:12.013090  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:12.116808  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:12.327162  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:12.440582  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:12.510508  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:12.511899  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:12.828075  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:12.940072  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:13.011307  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:13.011485  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:13.327697  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:13.440277  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:13.511412  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:13.512013  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:13.827173  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:13.940846  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:14.011342  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:14.012471  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:14.117452  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:14.327492  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:14.441135  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:14.510799  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:14.512243  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:14.827869  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:14.940583  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:15.010886  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:15.012979  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:15.327501  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:15.440891  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:15.511130  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:15.511175  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:15.827674  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:15.940807  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:16.010777  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:16.012167  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:16.117644  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:16.327727  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:16.440886  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:16.510879  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:16.512150  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:16.827159  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:16.940331  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:17.010883  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:17.012031  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:17.327885  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:17.441214  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:17.510962  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:17.512161  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:17.827922  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:17.940378  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:18.010836  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:18.011541  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:18.328095  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:18.440580  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:18.510990  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:18.511353  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:18.617206  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:18.827973  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:18.941364  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:19.011807  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:19.012848  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:19.327683  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:19.441149  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:19.510950  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:19.511991  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:19.828005  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:19.941917  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:20.011735  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:20.013175  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:20.327710  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:20.440685  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:20.510847  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:20.512733  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:20.617474  285958 node_ready.go:53] node "addons-638421" has status "Ready":"False"
	I1105 17:48:20.827808  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:20.941064  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:21.010951  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:21.011831  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:21.327667  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:21.440958  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:21.511595  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:21.512315  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:21.827152  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:21.940868  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:22.010372  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:22.012844  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:22.327560  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:22.441047  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:22.511193  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:22.511996  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:22.634241  285958 node_ready.go:49] node "addons-638421" has status "Ready":"True"
	I1105 17:48:22.634267  285958 node_ready.go:38] duration metric: took 42.520529112s for node "addons-638421" to be "Ready" ...
	I1105 17:48:22.634279  285958 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 17:48:22.657774  285958 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fc54b" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:22.832590  285958 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1105 17:48:22.832639  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:23.069865  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:23.070777  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:23.106850  285958 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1105 17:48:23.106877  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:23.337024  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:23.443489  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:23.544434  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:23.546163  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:23.833966  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:23.940500  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:24.011993  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:24.012934  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:24.164722  285958 pod_ready.go:93] pod "coredns-7c65d6cfc9-fc54b" in "kube-system" namespace has status "Ready":"True"
	I1105 17:48:24.164753  285958 pod_ready.go:82] duration metric: took 1.506952096s for pod "coredns-7c65d6cfc9-fc54b" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.164776  285958 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-638421" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.170006  285958 pod_ready.go:93] pod "etcd-addons-638421" in "kube-system" namespace has status "Ready":"True"
	I1105 17:48:24.170032  285958 pod_ready.go:82] duration metric: took 5.247173ms for pod "etcd-addons-638421" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.170047  285958 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-638421" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.175621  285958 pod_ready.go:93] pod "kube-apiserver-addons-638421" in "kube-system" namespace has status "Ready":"True"
	I1105 17:48:24.175643  285958 pod_ready.go:82] duration metric: took 5.588186ms for pod "kube-apiserver-addons-638421" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.175656  285958 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-638421" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.181204  285958 pod_ready.go:93] pod "kube-controller-manager-addons-638421" in "kube-system" namespace has status "Ready":"True"
	I1105 17:48:24.181225  285958 pod_ready.go:82] duration metric: took 5.560888ms for pod "kube-controller-manager-addons-638421" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.181240  285958 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rjktl" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.218573  285958 pod_ready.go:93] pod "kube-proxy-rjktl" in "kube-system" namespace has status "Ready":"True"
	I1105 17:48:24.218596  285958 pod_ready.go:82] duration metric: took 37.349287ms for pod "kube-proxy-rjktl" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.218609  285958 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-638421" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.329052  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:24.441381  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:24.510564  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:24.512309  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:24.617858  285958 pod_ready.go:93] pod "kube-scheduler-addons-638421" in "kube-system" namespace has status "Ready":"True"
	I1105 17:48:24.617890  285958 pod_ready.go:82] duration metric: took 399.27329ms for pod "kube-scheduler-addons-638421" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.617903  285958 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace to be "Ready" ...
	I1105 17:48:24.828967  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:24.940551  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:25.012060  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:25.012446  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:25.329031  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:25.440958  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:25.511879  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:25.515141  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:25.829751  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:25.941570  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:26.014944  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:26.015772  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:26.330313  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:26.441337  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:26.516239  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:26.522179  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:26.624678  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:26.828264  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:26.940798  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:27.012426  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:27.014296  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:27.332926  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:27.441277  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:27.511561  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:27.512858  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:27.829059  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:27.941679  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:28.011998  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:28.015644  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:28.329782  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:28.441893  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:28.513553  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:28.515812  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:28.625263  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:28.829277  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:28.941310  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:29.013458  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:29.015724  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:29.329275  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:29.441060  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:29.512257  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:29.513393  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:29.829778  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:29.941705  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:30.044735  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:30.046485  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:30.330138  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:30.441666  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:30.514398  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:30.517837  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:30.830025  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:30.941780  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:31.012458  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:31.015026  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:31.124583  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:31.330028  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:31.448295  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:31.512037  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:31.512655  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:31.828360  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:31.940015  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:32.011467  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:32.012685  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:32.329280  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:32.440451  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:32.510758  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:32.513174  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:32.829390  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:32.940717  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:33.011926  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:33.013372  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:33.132890  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:33.330399  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:33.440902  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:33.512648  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:33.514792  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:33.829964  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:33.941161  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:34.014174  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:34.016690  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:34.331103  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:34.441283  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:34.514114  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:34.516630  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:34.829471  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:34.941615  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:35.014143  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:35.015543  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:35.331020  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:35.443332  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:35.514057  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:35.522056  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:35.624569  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:35.830672  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:35.941785  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:36.013341  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:36.015550  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:36.331397  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:36.445796  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:36.512191  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:36.514760  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:36.829942  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:36.940857  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:37.011915  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:37.013293  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:37.329305  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:37.441811  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:37.513006  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:37.514044  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:37.624671  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:37.833082  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:37.942301  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:38.011404  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:38.013727  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:38.328831  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:38.440840  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:38.512715  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:38.513352  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:38.829724  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:38.942235  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:39.013528  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:39.015356  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:39.329635  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:39.441623  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:39.512957  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:39.515737  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:39.625524  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:39.831491  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:39.941118  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:40.015616  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:40.017737  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:40.329963  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:40.440957  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:40.518012  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:40.520694  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:40.829382  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:40.940989  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:41.010637  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:41.014826  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:41.334993  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:41.469161  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:41.570672  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:41.572012  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:41.627249  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:41.829487  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:41.940311  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:42.043652  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:42.044314  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:42.329012  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:42.441455  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:42.510325  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:42.512192  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:42.828813  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:42.940600  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:43.026584  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:43.028721  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:43.328554  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:43.441252  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:43.517514  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:43.518750  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:43.829138  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:43.957803  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:44.013741  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:44.015073  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:44.125777  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:44.329226  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:44.441256  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:44.513020  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:44.514858  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:44.830089  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:44.941004  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:45.011185  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:45.013204  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:45.328627  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:45.440657  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:45.514860  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:45.515795  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:45.829785  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:45.954806  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:46.029756  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:46.030133  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:46.333377  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:46.462710  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:46.511299  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:46.512889  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:46.624398  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:46.828710  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:46.940721  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:47.012135  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:47.013129  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:47.329031  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:47.441531  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:47.512452  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:47.514409  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:47.839907  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:47.940476  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:48.012201  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:48.014457  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:48.329592  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:48.442166  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:48.513561  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:48.514398  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:48.628570  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:48.829527  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:48.940185  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:49.011436  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:49.012187  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:49.329146  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:49.446800  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:49.545327  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:49.545681  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:49.829696  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:49.940824  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:50.012811  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:50.014030  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:50.330479  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:50.441292  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:50.511125  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:50.513491  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:50.829096  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:50.941148  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:51.014969  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:51.017078  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:51.125441  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:51.329804  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:51.441428  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:51.510572  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:51.511909  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:51.828686  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:51.941614  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:52.012352  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:52.013613  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:52.328913  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:52.441002  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:52.511651  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:52.512560  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:52.828764  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:52.940511  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:53.010657  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:53.012374  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:53.330422  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:53.440477  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:53.511233  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:53.512816  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:53.624228  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:53.829520  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:53.941051  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:54.013923  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:54.016929  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:54.329122  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:54.441487  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:54.522254  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:54.524417  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:54.829162  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:54.941529  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:55.016075  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:55.018402  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:55.329913  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:55.441010  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:55.513324  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:55.514369  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:55.625034  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:55.829913  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:55.940398  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:56.012919  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:56.014118  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:56.329407  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:56.441231  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:56.513277  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:56.515801  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:56.828652  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:56.941194  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:57.013327  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:57.014895  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:57.330886  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:57.441235  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:57.511603  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:57.513995  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:57.625752  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:48:57.830132  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:57.941185  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:58.013635  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:58.015335  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:58.329044  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:58.441291  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:58.512862  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:58.515091  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:58.829446  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:58.941268  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:59.013738  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:59.015319  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:59.330070  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:59.440993  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:48:59.515627  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:48:59.516198  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:48:59.829607  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:48:59.941235  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:00.030341  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:00.031926  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:00.130848  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:00.329838  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:00.440566  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:00.512718  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:00.514384  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:00.829282  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:00.941435  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:01.012452  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:01.013661  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:01.330137  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:01.440773  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:01.511655  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:01.517422  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:01.830949  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:01.941279  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:02.012467  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:02.015097  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:02.333414  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:02.441811  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:02.513194  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:02.514824  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:02.625962  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:02.829068  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:02.941133  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:03.012010  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:03.014279  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:03.330388  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:03.443420  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:03.511201  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:03.513399  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:03.830112  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:03.940814  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:04.012535  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:04.014753  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:04.329465  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:04.441214  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:04.541933  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:04.543932  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:04.828727  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:04.940818  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:05.011318  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:05.012946  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:05.126597  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:05.328796  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:05.440421  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:05.512227  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:05.513539  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:05.829073  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:05.940177  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:06.011512  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:06.013406  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:06.329560  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:06.440863  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:06.511296  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:06.513093  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:06.829431  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:06.940664  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:07.011311  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:07.013244  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:07.331507  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:07.441043  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:07.513595  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:07.514826  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:07.625143  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:07.830835  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:07.941737  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:08.020798  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:08.022782  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:08.330310  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:08.450055  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:08.511287  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:08.512009  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:08.829433  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:08.940774  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:09.011522  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:09.013225  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:49:09.330420  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:09.440072  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:09.511855  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:09.516591  285958 kapi.go:107] duration metric: took 1m26.508396466s to wait for kubernetes.io/minikube-addons=registry ...
	I1105 17:49:09.627033  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:09.829714  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:09.940773  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:10.011476  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:10.329961  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:10.441095  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:10.510696  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:10.828013  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:10.940405  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:11.010956  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:11.328962  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:11.443384  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:11.512373  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:11.830137  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:11.941709  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:12.011282  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:12.125262  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:12.341700  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:12.441474  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:12.511526  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:12.842462  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:12.941042  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:13.011362  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:13.330382  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:13.441517  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:13.511927  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:13.829887  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:13.941392  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:14.021656  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:14.126633  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:14.328981  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:14.440251  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:14.510853  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:14.828673  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:14.940747  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:15.011888  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:15.329548  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:15.441165  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:15.511656  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:15.829876  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:15.941932  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:16.012537  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:16.127190  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:16.329522  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:16.441370  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:16.510834  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:16.829174  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:16.942258  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:17.012562  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:17.329307  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:17.441836  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:17.511233  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:17.829563  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:17.942064  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:18.011964  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:18.328670  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:18.441557  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:18.511082  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:18.626188  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:18.829154  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:18.940766  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:19.011557  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:19.330040  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:19.440876  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:19.510988  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:19.830217  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:19.941676  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:20.043189  285958 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:49:20.328914  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:20.441831  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:20.511075  285958 kapi.go:107] duration metric: took 1m37.504610248s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1105 17:49:20.627838  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:20.834509  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:20.941083  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:21.328962  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:21.477208  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:21.834316  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:21.943517  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:22.328957  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:22.441604  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:22.829454  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:22.940784  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:23.125409  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:23.328810  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:23.441499  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:23.830180  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:23.940645  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:24.330052  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:24.448240  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:24.852432  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:24.941396  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:25.125853  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:25.330361  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:25.441494  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:25.828681  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:25.940867  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:26.329073  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:26.440422  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:26.829454  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:26.940980  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:27.126393  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:27.331898  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:27.441163  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:27.839656  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:27.940502  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:28.329665  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:28.441932  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:28.828532  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:28.941028  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:49:29.329733  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:29.441536  285958 kapi.go:107] duration metric: took 1m41.504466783s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1105 17:49:29.444196  285958 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-638421 cluster.
	I1105 17:49:29.446837  285958 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1105 17:49:29.449465  285958 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1105 17:49:29.627083  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:29.830150  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:30.329648  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:30.829574  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:31.328954  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:31.828477  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:32.130733  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:32.330699  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:32.829287  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:33.329333  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:33.832213  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:34.329396  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:34.625644  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:34.839454  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:35.328801  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:35.828827  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:36.330234  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:36.632072  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:36.830884  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:37.334121  285958 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:49:37.830079  285958 kapi.go:107] duration metric: took 1m54.506199764s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1105 17:49:37.833001  285958 out.go:177] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, ingress-dns, inspektor-gadget, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1105 17:49:37.835695  285958 addons.go:510] duration metric: took 2m1.841406106s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner nvidia-device-plugin ingress-dns inspektor-gadget storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1105 17:49:39.124531  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:41.124722  285958 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"False"
	I1105 17:49:43.624811  285958 pod_ready.go:93] pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace has status "Ready":"True"
	I1105 17:49:43.624841  285958 pod_ready.go:82] duration metric: took 1m19.006929434s for pod "metrics-server-84c5f94fbc-jnqlj" in "kube-system" namespace to be "Ready" ...
	I1105 17:49:43.624856  285958 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-sms7j" in "kube-system" namespace to be "Ready" ...
	I1105 17:49:43.630303  285958 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-sms7j" in "kube-system" namespace has status "Ready":"True"
	I1105 17:49:43.630331  285958 pod_ready.go:82] duration metric: took 5.466488ms for pod "nvidia-device-plugin-daemonset-sms7j" in "kube-system" namespace to be "Ready" ...
	I1105 17:49:43.630357  285958 pod_ready.go:39] duration metric: took 1m20.996057398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 17:49:43.630373  285958 api_server.go:52] waiting for apiserver process to appear ...
	I1105 17:49:43.630406  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 17:49:43.630471  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 17:49:43.690319  285958 cri.go:89] found id: "0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a"
	I1105 17:49:43.690387  285958 cri.go:89] found id: ""
	I1105 17:49:43.690402  285958 logs.go:282] 1 containers: [0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a]
	I1105 17:49:43.690459  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:43.694572  285958 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 17:49:43.694687  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 17:49:43.736655  285958 cri.go:89] found id: "5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0"
	I1105 17:49:43.736681  285958 cri.go:89] found id: ""
	I1105 17:49:43.736690  285958 logs.go:282] 1 containers: [5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0]
	I1105 17:49:43.736750  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:43.740097  285958 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 17:49:43.740224  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 17:49:43.777683  285958 cri.go:89] found id: "cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f"
	I1105 17:49:43.777705  285958 cri.go:89] found id: ""
	I1105 17:49:43.777713  285958 logs.go:282] 1 containers: [cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f]
	I1105 17:49:43.777767  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:43.781205  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 17:49:43.781276  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 17:49:43.826877  285958 cri.go:89] found id: "c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c"
	I1105 17:49:43.826900  285958 cri.go:89] found id: ""
	I1105 17:49:43.826909  285958 logs.go:282] 1 containers: [c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c]
	I1105 17:49:43.826986  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:43.830523  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 17:49:43.830611  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 17:49:43.875899  285958 cri.go:89] found id: "4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f"
	I1105 17:49:43.875980  285958 cri.go:89] found id: ""
	I1105 17:49:43.876003  285958 logs.go:282] 1 containers: [4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f]
	I1105 17:49:43.876093  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:43.879686  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 17:49:43.879780  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 17:49:43.918382  285958 cri.go:89] found id: "bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0"
	I1105 17:49:43.918412  285958 cri.go:89] found id: ""
	I1105 17:49:43.918426  285958 logs.go:282] 1 containers: [bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0]
	I1105 17:49:43.918489  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:43.921996  285958 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 17:49:43.922068  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 17:49:43.960184  285958 cri.go:89] found id: "1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952"
	I1105 17:49:43.960208  285958 cri.go:89] found id: ""
	I1105 17:49:43.960217  285958 logs.go:282] 1 containers: [1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952]
	I1105 17:49:43.960274  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:43.963882  285958 logs.go:123] Gathering logs for kube-scheduler [c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c] ...
	I1105 17:49:43.963908  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c"
	I1105 17:49:44.027704  285958 logs.go:123] Gathering logs for kube-proxy [4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f] ...
	I1105 17:49:44.027735  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f"
	I1105 17:49:44.076175  285958 logs.go:123] Gathering logs for kindnet [1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952] ...
	I1105 17:49:44.076246  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952"
	I1105 17:49:44.115998  285958 logs.go:123] Gathering logs for kubelet ...
	I1105 17:49:44.116032  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1105 17:49:44.177532  285958 logs.go:138] Found kubelet problem: Nov 05 17:47:36 addons-638421 kubelet[1497]: W1105 17:47:36.418169    1497 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-638421' and this object
	W1105 17:49:44.177768  285958 logs.go:138] Found kubelet problem: Nov 05 17:47:36 addons-638421 kubelet[1497]: E1105 17:47:36.418217    1497 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	W1105 17:49:44.204041  285958 logs.go:138] Found kubelet problem: Nov 05 17:48:22 addons-638421 kubelet[1497]: W1105 17:48:22.583617    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-638421' and this object
	W1105 17:49:44.204282  285958 logs.go:138] Found kubelet problem: Nov 05 17:48:22 addons-638421 kubelet[1497]: E1105 17:48:22.583667    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	I1105 17:49:44.237909  285958 logs.go:123] Gathering logs for dmesg ...
	I1105 17:49:44.237943  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 17:49:44.257082  285958 logs.go:123] Gathering logs for describe nodes ...
	I1105 17:49:44.257111  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 17:49:44.434838  285958 logs.go:123] Gathering logs for coredns [cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f] ...
	I1105 17:49:44.434871  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f"
	I1105 17:49:44.485089  285958 logs.go:123] Gathering logs for container status ...
	I1105 17:49:44.485120  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 17:49:44.535847  285958 logs.go:123] Gathering logs for kube-apiserver [0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a] ...
	I1105 17:49:44.535878  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a"
	I1105 17:49:44.613138  285958 logs.go:123] Gathering logs for etcd [5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0] ...
	I1105 17:49:44.613173  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0"
	I1105 17:49:44.666981  285958 logs.go:123] Gathering logs for kube-controller-manager [bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0] ...
	I1105 17:49:44.667014  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0"
	I1105 17:49:44.746061  285958 logs.go:123] Gathering logs for CRI-O ...
	I1105 17:49:44.746097  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 17:49:44.844511  285958 out.go:358] Setting ErrFile to fd 2...
	I1105 17:49:44.844547  285958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1105 17:49:44.844615  285958 out.go:270] X Problems detected in kubelet:
	W1105 17:49:44.844627  285958 out.go:270]   Nov 05 17:47:36 addons-638421 kubelet[1497]: W1105 17:47:36.418169    1497 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-638421' and this object
	W1105 17:49:44.844638  285958 out.go:270]   Nov 05 17:47:36 addons-638421 kubelet[1497]: E1105 17:47:36.418217    1497 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	W1105 17:49:44.844645  285958 out.go:270]   Nov 05 17:48:22 addons-638421 kubelet[1497]: W1105 17:48:22.583617    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-638421' and this object
	W1105 17:49:44.844652  285958 out.go:270]   Nov 05 17:48:22 addons-638421 kubelet[1497]: E1105 17:48:22.583667    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	I1105 17:49:44.844664  285958 out.go:358] Setting ErrFile to fd 2...
	I1105 17:49:44.844671  285958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:49:54.846654  285958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 17:49:54.863106  285958 api_server.go:72] duration metric: took 2m18.86965892s to wait for apiserver process to appear ...
	I1105 17:49:54.863135  285958 api_server.go:88] waiting for apiserver healthz status ...
	I1105 17:49:54.863174  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 17:49:54.863237  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 17:49:54.900477  285958 cri.go:89] found id: "0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a"
	I1105 17:49:54.900497  285958 cri.go:89] found id: ""
	I1105 17:49:54.900505  285958 logs.go:282] 1 containers: [0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a]
	I1105 17:49:54.900560  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:54.903993  285958 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 17:49:54.904060  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 17:49:54.941184  285958 cri.go:89] found id: "5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0"
	I1105 17:49:54.941208  285958 cri.go:89] found id: ""
	I1105 17:49:54.941217  285958 logs.go:282] 1 containers: [5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0]
	I1105 17:49:54.941272  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:54.944714  285958 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 17:49:54.944788  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 17:49:54.986856  285958 cri.go:89] found id: "cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f"
	I1105 17:49:54.986880  285958 cri.go:89] found id: ""
	I1105 17:49:54.986888  285958 logs.go:282] 1 containers: [cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f]
	I1105 17:49:54.986947  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:54.990528  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 17:49:54.990606  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 17:49:55.030564  285958 cri.go:89] found id: "c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c"
	I1105 17:49:55.030589  285958 cri.go:89] found id: ""
	I1105 17:49:55.030643  285958 logs.go:282] 1 containers: [c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c]
	I1105 17:49:55.030720  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:55.034564  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 17:49:55.034654  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 17:49:55.075092  285958 cri.go:89] found id: "4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f"
	I1105 17:49:55.075117  285958 cri.go:89] found id: ""
	I1105 17:49:55.075126  285958 logs.go:282] 1 containers: [4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f]
	I1105 17:49:55.075184  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:55.078865  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 17:49:55.078940  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 17:49:55.118682  285958 cri.go:89] found id: "bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0"
	I1105 17:49:55.118705  285958 cri.go:89] found id: ""
	I1105 17:49:55.118714  285958 logs.go:282] 1 containers: [bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0]
	I1105 17:49:55.118769  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:55.122390  285958 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 17:49:55.122468  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 17:49:55.159824  285958 cri.go:89] found id: "1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952"
	I1105 17:49:55.159847  285958 cri.go:89] found id: ""
	I1105 17:49:55.159856  285958 logs.go:282] 1 containers: [1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952]
	I1105 17:49:55.159915  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:49:55.163457  285958 logs.go:123] Gathering logs for dmesg ...
	I1105 17:49:55.163483  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 17:49:55.179445  285958 logs.go:123] Gathering logs for etcd [5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0] ...
	I1105 17:49:55.179473  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0"
	I1105 17:49:55.234825  285958 logs.go:123] Gathering logs for kube-proxy [4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f] ...
	I1105 17:49:55.234859  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f"
	I1105 17:49:55.276024  285958 logs.go:123] Gathering logs for kindnet [1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952] ...
	I1105 17:49:55.276050  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952"
	I1105 17:49:55.320748  285958 logs.go:123] Gathering logs for CRI-O ...
	I1105 17:49:55.320777  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 17:49:55.415558  285958 logs.go:123] Gathering logs for container status ...
	I1105 17:49:55.415597  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 17:49:55.471921  285958 logs.go:123] Gathering logs for kubelet ...
	I1105 17:49:55.471959  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1105 17:49:55.531460  285958 logs.go:138] Found kubelet problem: Nov 05 17:47:36 addons-638421 kubelet[1497]: W1105 17:47:36.418169    1497 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-638421' and this object
	W1105 17:49:55.531692  285958 logs.go:138] Found kubelet problem: Nov 05 17:47:36 addons-638421 kubelet[1497]: E1105 17:47:36.418217    1497 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	W1105 17:49:55.557830  285958 logs.go:138] Found kubelet problem: Nov 05 17:48:22 addons-638421 kubelet[1497]: W1105 17:48:22.583617    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-638421' and this object
	W1105 17:49:55.558064  285958 logs.go:138] Found kubelet problem: Nov 05 17:48:22 addons-638421 kubelet[1497]: E1105 17:48:22.583667    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	I1105 17:49:55.592228  285958 logs.go:123] Gathering logs for describe nodes ...
	I1105 17:49:55.592254  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 17:49:55.723278  285958 logs.go:123] Gathering logs for kube-apiserver [0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a] ...
	I1105 17:49:55.723315  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a"
	I1105 17:49:55.791344  285958 logs.go:123] Gathering logs for coredns [cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f] ...
	I1105 17:49:55.791376  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f"
	I1105 17:49:55.832101  285958 logs.go:123] Gathering logs for kube-scheduler [c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c] ...
	I1105 17:49:55.832130  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c"
	I1105 17:49:55.874288  285958 logs.go:123] Gathering logs for kube-controller-manager [bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0] ...
	I1105 17:49:55.874323  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0"
	I1105 17:49:55.942310  285958 out.go:358] Setting ErrFile to fd 2...
	I1105 17:49:55.942342  285958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1105 17:49:55.942407  285958 out.go:270] X Problems detected in kubelet:
	W1105 17:49:55.942418  285958 out.go:270]   Nov 05 17:47:36 addons-638421 kubelet[1497]: W1105 17:47:36.418169    1497 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-638421' and this object
	W1105 17:49:55.942427  285958 out.go:270]   Nov 05 17:47:36 addons-638421 kubelet[1497]: E1105 17:47:36.418217    1497 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	W1105 17:49:55.942443  285958 out.go:270]   Nov 05 17:48:22 addons-638421 kubelet[1497]: W1105 17:48:22.583617    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-638421' and this object
	W1105 17:49:55.942452  285958 out.go:270]   Nov 05 17:48:22 addons-638421 kubelet[1497]: E1105 17:48:22.583667    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	I1105 17:49:55.942465  285958 out.go:358] Setting ErrFile to fd 2...
	I1105 17:49:55.942472  285958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:50:05.943997  285958 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 17:50:05.954316  285958 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1105 17:50:05.955359  285958 api_server.go:141] control plane version: v1.31.2
	I1105 17:50:05.955389  285958 api_server.go:131] duration metric: took 11.092246489s to wait for apiserver health ...
	I1105 17:50:05.955399  285958 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 17:50:05.955422  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 17:50:05.955486  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 17:50:05.995851  285958 cri.go:89] found id: "0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a"
	I1105 17:50:05.995873  285958 cri.go:89] found id: ""
	I1105 17:50:05.995882  285958 logs.go:282] 1 containers: [0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a]
	I1105 17:50:05.995938  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:50:05.999482  285958 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 17:50:05.999567  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 17:50:06.038315  285958 cri.go:89] found id: "5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0"
	I1105 17:50:06.038338  285958 cri.go:89] found id: ""
	I1105 17:50:06.038347  285958 logs.go:282] 1 containers: [5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0]
	I1105 17:50:06.038404  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:50:06.041930  285958 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 17:50:06.042048  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 17:50:06.085559  285958 cri.go:89] found id: "cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f"
	I1105 17:50:06.085582  285958 cri.go:89] found id: ""
	I1105 17:50:06.085591  285958 logs.go:282] 1 containers: [cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f]
	I1105 17:50:06.085649  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:50:06.089348  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 17:50:06.089419  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 17:50:06.128403  285958 cri.go:89] found id: "c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c"
	I1105 17:50:06.128428  285958 cri.go:89] found id: ""
	I1105 17:50:06.128436  285958 logs.go:282] 1 containers: [c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c]
	I1105 17:50:06.128501  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:50:06.132326  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 17:50:06.132406  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 17:50:06.175706  285958 cri.go:89] found id: "4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f"
	I1105 17:50:06.175729  285958 cri.go:89] found id: ""
	I1105 17:50:06.175737  285958 logs.go:282] 1 containers: [4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f]
	I1105 17:50:06.175793  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:50:06.179314  285958 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 17:50:06.179388  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 17:50:06.222170  285958 cri.go:89] found id: "bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0"
	I1105 17:50:06.222193  285958 cri.go:89] found id: ""
	I1105 17:50:06.222202  285958 logs.go:282] 1 containers: [bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0]
	I1105 17:50:06.222258  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:50:06.225638  285958 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 17:50:06.225708  285958 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 17:50:06.269890  285958 cri.go:89] found id: "1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952"
	I1105 17:50:06.269912  285958 cri.go:89] found id: ""
	I1105 17:50:06.269920  285958 logs.go:282] 1 containers: [1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952]
	I1105 17:50:06.269978  285958 ssh_runner.go:195] Run: which crictl
	I1105 17:50:06.273628  285958 logs.go:123] Gathering logs for coredns [cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f] ...
	I1105 17:50:06.273657  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f"
	I1105 17:50:06.312988  285958 logs.go:123] Gathering logs for kube-scheduler [c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c] ...
	I1105 17:50:06.313022  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c"
	I1105 17:50:06.364918  285958 logs.go:123] Gathering logs for kube-controller-manager [bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0] ...
	I1105 17:50:06.364951  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0"
	I1105 17:50:06.434251  285958 logs.go:123] Gathering logs for kindnet [1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952] ...
	I1105 17:50:06.434291  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952"
	I1105 17:50:06.473656  285958 logs.go:123] Gathering logs for CRI-O ...
	I1105 17:50:06.473685  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 17:50:06.572390  285958 logs.go:123] Gathering logs for container status ...
	I1105 17:50:06.572427  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 17:50:06.640943  285958 logs.go:123] Gathering logs for describe nodes ...
	I1105 17:50:06.640977  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 17:50:06.797333  285958 logs.go:123] Gathering logs for kube-apiserver [0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a] ...
	I1105 17:50:06.797369  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a"
	I1105 17:50:06.850355  285958 logs.go:123] Gathering logs for etcd [5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0] ...
	I1105 17:50:06.850387  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0"
	I1105 17:50:06.910877  285958 logs.go:123] Gathering logs for kube-proxy [4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f] ...
	I1105 17:50:06.910910  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f"
	I1105 17:50:06.949712  285958 logs.go:123] Gathering logs for kubelet ...
	I1105 17:50:06.949741  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1105 17:50:07.003517  285958 logs.go:138] Found kubelet problem: Nov 05 17:47:36 addons-638421 kubelet[1497]: W1105 17:47:36.418169    1497 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-638421' and this object
	W1105 17:50:07.003756  285958 logs.go:138] Found kubelet problem: Nov 05 17:47:36 addons-638421 kubelet[1497]: E1105 17:47:36.418217    1497 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	W1105 17:50:07.030068  285958 logs.go:138] Found kubelet problem: Nov 05 17:48:22 addons-638421 kubelet[1497]: W1105 17:48:22.583617    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-638421' and this object
	W1105 17:50:07.030310  285958 logs.go:138] Found kubelet problem: Nov 05 17:48:22 addons-638421 kubelet[1497]: E1105 17:48:22.583667    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	I1105 17:50:07.065433  285958 logs.go:123] Gathering logs for dmesg ...
	I1105 17:50:07.065462  285958 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 17:50:07.082079  285958 out.go:358] Setting ErrFile to fd 2...
	I1105 17:50:07.082103  285958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1105 17:50:07.082154  285958 out.go:270] X Problems detected in kubelet:
	W1105 17:50:07.082171  285958 out.go:270]   Nov 05 17:47:36 addons-638421 kubelet[1497]: W1105 17:47:36.418169    1497 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-638421' and this object
	W1105 17:50:07.082178  285958 out.go:270]   Nov 05 17:47:36 addons-638421 kubelet[1497]: E1105 17:47:36.418217    1497 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	W1105 17:50:07.082186  285958 out.go:270]   Nov 05 17:48:22 addons-638421 kubelet[1497]: W1105 17:48:22.583617    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-638421" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-638421' and this object
	W1105 17:50:07.082202  285958 out.go:270]   Nov 05 17:48:22 addons-638421 kubelet[1497]: E1105 17:48:22.583667    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-638421\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-638421' and this object" logger="UnhandledError"
	I1105 17:50:07.082208  285958 out.go:358] Setting ErrFile to fd 2...
	I1105 17:50:07.082214  285958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:50:17.093804  285958 system_pods.go:59] 18 kube-system pods found
	I1105 17:50:17.093852  285958 system_pods.go:61] "coredns-7c65d6cfc9-fc54b" [2ddf511d-7116-4d85-9a47-69451cf3567b] Running
	I1105 17:50:17.093859  285958 system_pods.go:61] "csi-hostpath-attacher-0" [0a836e59-ab7e-4299-9fa1-58898352e6e1] Running
	I1105 17:50:17.093864  285958 system_pods.go:61] "csi-hostpath-resizer-0" [9e2fd9dc-0b28-4b24-af35-169834609626] Running
	I1105 17:50:17.093868  285958 system_pods.go:61] "csi-hostpathplugin-spl7f" [302e097c-b1e5-4a6e-8974-ed54ac3622a7] Running
	I1105 17:50:17.093874  285958 system_pods.go:61] "etcd-addons-638421" [a4272f93-3c10-41e4-aa9c-d92d18e93912] Running
	I1105 17:50:17.093879  285958 system_pods.go:61] "kindnet-mgcb7" [edefae1d-4f88-4e94-a3f8-881d352214d7] Running
	I1105 17:50:17.093884  285958 system_pods.go:61] "kube-apiserver-addons-638421" [1823851c-e4ac-418b-806e-ec449280ed27] Running
	I1105 17:50:17.093922  285958 system_pods.go:61] "kube-controller-manager-addons-638421" [4cc07926-753f-4483-ac98-15581396a5bb] Running
	I1105 17:50:17.093934  285958 system_pods.go:61] "kube-ingress-dns-minikube" [347ca6ec-8068-4243-80fc-ec6e6a0eeb64] Running
	I1105 17:50:17.093938  285958 system_pods.go:61] "kube-proxy-rjktl" [d984a2cc-7426-4044-8f13-9082c887bda6] Running
	I1105 17:50:17.093942  285958 system_pods.go:61] "kube-scheduler-addons-638421" [1c971a68-3594-46ce-858f-59234800648b] Running
	I1105 17:50:17.093946  285958 system_pods.go:61] "metrics-server-84c5f94fbc-jnqlj" [d43aacca-7261-4530-9a58-1456060cb884] Running
	I1105 17:50:17.093949  285958 system_pods.go:61] "nvidia-device-plugin-daemonset-sms7j" [618e6ceb-8422-465e-9951-05b2b10ce4b0] Running
	I1105 17:50:17.093954  285958 system_pods.go:61] "registry-66c9cd494c-xl46f" [fc8d5d2f-faa3-4f66-b3c1-dac5435a86e5] Running
	I1105 17:50:17.093962  285958 system_pods.go:61] "registry-proxy-2jjl8" [c9892084-3bb9-41d8-b4e5-856524765e94] Running
	I1105 17:50:17.093966  285958 system_pods.go:61] "snapshot-controller-56fcc65765-4tgkv" [e740a30d-66d7-484d-ab45-50d3d0206cfc] Running
	I1105 17:50:17.093970  285958 system_pods.go:61] "snapshot-controller-56fcc65765-ljxfj" [2d3f31f4-59d7-4584-ac5d-6fe0246e99fa] Running
	I1105 17:50:17.093974  285958 system_pods.go:61] "storage-provisioner" [258ce47e-4fa4-4230-9eef-22ee33056db8] Running
	I1105 17:50:17.093980  285958 system_pods.go:74] duration metric: took 11.138573679s to wait for pod list to return data ...
	I1105 17:50:17.093992  285958 default_sa.go:34] waiting for default service account to be created ...
	I1105 17:50:17.096627  285958 default_sa.go:45] found service account: "default"
	I1105 17:50:17.096655  285958 default_sa.go:55] duration metric: took 2.656727ms for default service account to be created ...
	I1105 17:50:17.096664  285958 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 17:50:17.107083  285958 system_pods.go:86] 18 kube-system pods found
	I1105 17:50:17.107116  285958 system_pods.go:89] "coredns-7c65d6cfc9-fc54b" [2ddf511d-7116-4d85-9a47-69451cf3567b] Running
	I1105 17:50:17.107125  285958 system_pods.go:89] "csi-hostpath-attacher-0" [0a836e59-ab7e-4299-9fa1-58898352e6e1] Running
	I1105 17:50:17.107130  285958 system_pods.go:89] "csi-hostpath-resizer-0" [9e2fd9dc-0b28-4b24-af35-169834609626] Running
	I1105 17:50:17.107134  285958 system_pods.go:89] "csi-hostpathplugin-spl7f" [302e097c-b1e5-4a6e-8974-ed54ac3622a7] Running
	I1105 17:50:17.107140  285958 system_pods.go:89] "etcd-addons-638421" [a4272f93-3c10-41e4-aa9c-d92d18e93912] Running
	I1105 17:50:17.107144  285958 system_pods.go:89] "kindnet-mgcb7" [edefae1d-4f88-4e94-a3f8-881d352214d7] Running
	I1105 17:50:17.107149  285958 system_pods.go:89] "kube-apiserver-addons-638421" [1823851c-e4ac-418b-806e-ec449280ed27] Running
	I1105 17:50:17.107153  285958 system_pods.go:89] "kube-controller-manager-addons-638421" [4cc07926-753f-4483-ac98-15581396a5bb] Running
	I1105 17:50:17.107158  285958 system_pods.go:89] "kube-ingress-dns-minikube" [347ca6ec-8068-4243-80fc-ec6e6a0eeb64] Running
	I1105 17:50:17.107164  285958 system_pods.go:89] "kube-proxy-rjktl" [d984a2cc-7426-4044-8f13-9082c887bda6] Running
	I1105 17:50:17.107168  285958 system_pods.go:89] "kube-scheduler-addons-638421" [1c971a68-3594-46ce-858f-59234800648b] Running
	I1105 17:50:17.107173  285958 system_pods.go:89] "metrics-server-84c5f94fbc-jnqlj" [d43aacca-7261-4530-9a58-1456060cb884] Running
	I1105 17:50:17.107181  285958 system_pods.go:89] "nvidia-device-plugin-daemonset-sms7j" [618e6ceb-8422-465e-9951-05b2b10ce4b0] Running
	I1105 17:50:17.107188  285958 system_pods.go:89] "registry-66c9cd494c-xl46f" [fc8d5d2f-faa3-4f66-b3c1-dac5435a86e5] Running
	I1105 17:50:17.107201  285958 system_pods.go:89] "registry-proxy-2jjl8" [c9892084-3bb9-41d8-b4e5-856524765e94] Running
	I1105 17:50:17.107205  285958 system_pods.go:89] "snapshot-controller-56fcc65765-4tgkv" [e740a30d-66d7-484d-ab45-50d3d0206cfc] Running
	I1105 17:50:17.107210  285958 system_pods.go:89] "snapshot-controller-56fcc65765-ljxfj" [2d3f31f4-59d7-4584-ac5d-6fe0246e99fa] Running
	I1105 17:50:17.107214  285958 system_pods.go:89] "storage-provisioner" [258ce47e-4fa4-4230-9eef-22ee33056db8] Running
	I1105 17:50:17.107224  285958 system_pods.go:126] duration metric: took 10.55376ms to wait for k8s-apps to be running ...
	I1105 17:50:17.107237  285958 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 17:50:17.107297  285958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 17:50:17.118986  285958 system_svc.go:56] duration metric: took 11.739161ms WaitForService to wait for kubelet
	I1105 17:50:17.119017  285958 kubeadm.go:582] duration metric: took 2m41.125574834s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 17:50:17.119038  285958 node_conditions.go:102] verifying NodePressure condition ...
	I1105 17:50:17.122991  285958 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1105 17:50:17.123029  285958 node_conditions.go:123] node cpu capacity is 2
	I1105 17:50:17.123051  285958 node_conditions.go:105] duration metric: took 3.984266ms to run NodePressure ...
	I1105 17:50:17.123065  285958 start.go:241] waiting for startup goroutines ...
	I1105 17:50:17.123072  285958 start.go:246] waiting for cluster config update ...
	I1105 17:50:17.123095  285958 start.go:255] writing updated cluster config ...
	I1105 17:50:17.123411  285958 ssh_runner.go:195] Run: rm -f paused
	I1105 17:50:17.477322  285958 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 17:50:17.481893  285958 out.go:177] * Done! kubectl is now configured to use "addons-638421" cluster and "default" namespace by default
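
The provisioning log ends with minikube's readiness waits: it polls the apiserver's /healthz endpoint at https://192.168.49.2:8443 until it returns 200, then walks through kube-system pods, the default service account, running k8s-apps, the kubelet service, and the node pressure conditions. The Go sketch below illustrates only the healthz-polling step; it is an assumption-level illustration, not minikube's actual wait code, and it skips TLS verification purely to keep the sketch short.

	// Minimal sketch of a healthz polling loop (illustration only, not
	// minikube's implementation; TLS verification is skipped for brevity).
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the log's "returned 200: ok" case
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("healthz did not return 200 within %v", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
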
	
	
	==> CRI-O <==
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.246099657Z" level=info msg="Removed container 41add762017b261bc27f9ddaddcafa942816cf1b9d639094682a93fa1716d04c: local-path-storage/helper-pod-delete-pvc-b3573bff-9dda-4c36-88d8-bc4018837214/helper-pod" id=30f0ffb5-661a-49a0-ac5d-306486c89cd0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.247347738Z" level=info msg="Removing container: 3144822571a0c33e03409c8cb17aac56cbb5dec676e5de2b64f39fbcf7775a3c" id=2e38566c-f76b-457e-a51e-91714d25a141 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.264414597Z" level=info msg="Removed container 3144822571a0c33e03409c8cb17aac56cbb5dec676e5de2b64f39fbcf7775a3c: default/test-local-path/busybox" id=2e38566c-f76b-457e-a51e-91714d25a141 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.265770607Z" level=info msg="Removing container: 03251d61ea8b94384b0a9cac29dcdd5a4a7b69cc4775c3b5b8246a9c8481b664" id=f22bd84d-1274-4449-bd90-85f7c679ffa0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.282890504Z" level=info msg="Removed container 03251d61ea8b94384b0a9cac29dcdd5a4a7b69cc4775c3b5b8246a9c8481b664: local-path-storage/helper-pod-create-pvc-b3573bff-9dda-4c36-88d8-bc4018837214/helper-pod" id=f22bd84d-1274-4449-bd90-85f7c679ffa0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.284235766Z" level=info msg="Stopping pod sandbox: d2a2cdf851181952ff7913eccf1706ed0bd7d31f9f32dc463377226f1a401cad" id=652b880b-ec6d-4196-b804-0499a660824c name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.284276192Z" level=info msg="Stopped pod sandbox (already stopped): d2a2cdf851181952ff7913eccf1706ed0bd7d31f9f32dc463377226f1a401cad" id=652b880b-ec6d-4196-b804-0499a660824c name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.284550383Z" level=info msg="Removing pod sandbox: d2a2cdf851181952ff7913eccf1706ed0bd7d31f9f32dc463377226f1a401cad" id=9a2af377-2842-4843-9f54-921967a1abe2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.294801447Z" level=info msg="Removed pod sandbox: d2a2cdf851181952ff7913eccf1706ed0bd7d31f9f32dc463377226f1a401cad" id=9a2af377-2842-4843-9f54-921967a1abe2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.295357065Z" level=info msg="Stopping pod sandbox: a08132c15fe6d0de9fb17a01f7227cee22f211b8d34d5449fecc26f06380fad8" id=d0b0f949-0c92-4255-9c69-b45fc5578524 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.295391231Z" level=info msg="Stopped pod sandbox (already stopped): a08132c15fe6d0de9fb17a01f7227cee22f211b8d34d5449fecc26f06380fad8" id=d0b0f949-0c92-4255-9c69-b45fc5578524 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.295808363Z" level=info msg="Removing pod sandbox: a08132c15fe6d0de9fb17a01f7227cee22f211b8d34d5449fecc26f06380fad8" id=043eb968-05a5-4f8d-b546-4adfa19eeaba name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.308291670Z" level=info msg="Removed pod sandbox: a08132c15fe6d0de9fb17a01f7227cee22f211b8d34d5449fecc26f06380fad8" id=043eb968-05a5-4f8d-b546-4adfa19eeaba name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.309009963Z" level=info msg="Stopping pod sandbox: 84171d167ef8ec103294adf9eb8cf304cd1b52c1d40d5e99993ec4669262be24" id=fb836eb5-a796-49fa-b0ec-4a4672d39878 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.309134254Z" level=info msg="Stopped pod sandbox (already stopped): 84171d167ef8ec103294adf9eb8cf304cd1b52c1d40d5e99993ec4669262be24" id=fb836eb5-a796-49fa-b0ec-4a4672d39878 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.309534729Z" level=info msg="Removing pod sandbox: 84171d167ef8ec103294adf9eb8cf304cd1b52c1d40d5e99993ec4669262be24" id=42f57d92-103f-44bc-988b-b1c551572de2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.318606611Z" level=info msg="Removed pod sandbox: 84171d167ef8ec103294adf9eb8cf304cd1b52c1d40d5e99993ec4669262be24" id=42f57d92-103f-44bc-988b-b1c551572de2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.319080285Z" level=info msg="Stopping pod sandbox: fe22c59e58e1a4853dae577365d0a14a94a42f8c90952e5c5b662aa55b24f3f8" id=de8ca288-fe0f-495f-8cff-da8130d80142 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.319128342Z" level=info msg="Stopped pod sandbox (already stopped): fe22c59e58e1a4853dae577365d0a14a94a42f8c90952e5c5b662aa55b24f3f8" id=de8ca288-fe0f-495f-8cff-da8130d80142 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.319415677Z" level=info msg="Removing pod sandbox: fe22c59e58e1a4853dae577365d0a14a94a42f8c90952e5c5b662aa55b24f3f8" id=22f2d6f4-c577-4008-b32d-0953e1a478b1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.328542336Z" level=info msg="Removed pod sandbox: fe22c59e58e1a4853dae577365d0a14a94a42f8c90952e5c5b662aa55b24f3f8" id=22f2d6f4-c577-4008-b32d-0953e1a478b1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.329208354Z" level=info msg="Stopping pod sandbox: 296f24038e0426a46f3fa34b564ff56931ba14bc4f8c674f67f2f8546eb88fb9" id=f0850442-b424-4f3b-931a-10a851976f6b name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.329246803Z" level=info msg="Stopped pod sandbox (already stopped): 296f24038e0426a46f3fa34b564ff56931ba14bc4f8c674f67f2f8546eb88fb9" id=f0850442-b424-4f3b-931a-10a851976f6b name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.329537830Z" level=info msg="Removing pod sandbox: 296f24038e0426a46f3fa34b564ff56931ba14bc4f8c674f67f2f8546eb88fb9" id=a2c18367-0e44-4cd5-8937-2f4af34bd1ca name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 05 17:55:31 addons-638421 crio[965]: time="2024-11-05 17:55:31.340037534Z" level=info msg="Removed pod sandbox: 296f24038e0426a46f3fa34b564ff56931ba14bc4f8c674f67f2f8546eb88fb9" id=a2c18367-0e44-4cd5-8937-2f4af34bd1ca name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb1c82d838bce       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   7c43da301aca8       hello-world-app-55bf9c44b4-vt9rn
	ae864e6292856       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         4 minutes ago       Running             nginx                     0                   2b21644876658       nginx
	38801e9ceca37       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   680bf55362991       busybox
	3f6b6439b98f1       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   7 minutes ago       Running             metrics-server            0                   774723ef7a311       metrics-server-84c5f94fbc-jnqlj
	3b9991fb54cd6       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98        7 minutes ago       Running             local-path-provisioner    0                   1e1e44524e367       local-path-provisioner-86d989889c-fw752
	928fd37a67be8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        7 minutes ago       Running             storage-provisioner       0                   0f074abc0f7ea       storage-provisioner
	cb903a97940ff       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        7 minutes ago       Running             coredns                   0                   8a5b67615ea90       coredns-7c65d6cfc9-fc54b
	1fd0ca35d5df4       docker.io/kindest/kindnetd@sha256:96156439ac8537499e45fedf68a7cb80f0fbafd77fc4d7a5204d3151cf412450                      8 minutes ago       Running             kindnet-cni               0                   5487a21b0441a       kindnet-mgcb7
	4c604d9201f70       021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba                                                        8 minutes ago       Running             kube-proxy                0                   8efea88ac21a5       kube-proxy-rjktl
	bab636744f5f7       9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba                                                        8 minutes ago       Running             kube-controller-manager   0                   908191a37142d       kube-controller-manager-addons-638421
	0b5b17e046037       f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270                                                        8 minutes ago       Running             kube-apiserver            0                   948e6fc4819ec       kube-apiserver-addons-638421
	c43ffe8529476       d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a                                                        8 minutes ago       Running             kube-scheduler            0                   338974ac07cf8       kube-scheduler-addons-638421
	5a11a95bd109a       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        8 minutes ago       Running             etcd                      0                   cc34723a8caff       etcd-addons-638421
	
	
	==> coredns [cb903a97940ff1b13f8863edc6bcaade2c27ae389cf4bdcfa0f27ec7da00071f] <==
	[INFO] 10.244.0.19:47174 - 62580 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000155823s
	[INFO] 10.244.0.19:47174 - 8864 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002227337s
	[INFO] 10.244.0.19:47697 - 57157 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002277027s
	[INFO] 10.244.0.19:47697 - 29417 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002114369s
	[INFO] 10.244.0.19:47174 - 31116 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002099247s
	[INFO] 10.244.0.19:47697 - 50060 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000182646s
	[INFO] 10.244.0.19:47174 - 10660 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000090191s
	[INFO] 10.244.0.19:57501 - 20978 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000169337s
	[INFO] 10.244.0.19:60638 - 38346 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000067906s
	[INFO] 10.244.0.19:60638 - 37566 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000463565s
	[INFO] 10.244.0.19:57501 - 6289 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000325152s
	[INFO] 10.244.0.19:60638 - 21574 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000085785s
	[INFO] 10.244.0.19:60638 - 54531 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000100874s
	[INFO] 10.244.0.19:57501 - 48843 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000059356s
	[INFO] 10.244.0.19:57501 - 48127 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000070104s
	[INFO] 10.244.0.19:60638 - 59622 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056123s
	[INFO] 10.244.0.19:60638 - 60103 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060635s
	[INFO] 10.244.0.19:57501 - 36621 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005719s
	[INFO] 10.244.0.19:57501 - 4766 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000170863s
	[INFO] 10.244.0.19:60638 - 50534 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001397652s
	[INFO] 10.244.0.19:57501 - 50557 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001304499s
	[INFO] 10.244.0.19:60638 - 33063 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002292986s
	[INFO] 10.244.0.19:60638 - 53024 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075184s
	[INFO] 10.244.0.19:57501 - 52662 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001207998s
	[INFO] 10.244.0.19:57501 - 14561 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000070055s
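
The NXDOMAIN/NOERROR pattern above is ordinary pod DNS search-path expansion: with the usual ndots:5 setting, a name such as hello-world-app.default.svc.cluster.local (four dots) is expanded through every suffix in the pod's resolv.conf search list before being tried as-is, so the suffixed lookups come back NXDOMAIN and only the bare name answers NOERROR. The sketch below enumerates those candidates under that assumption; the search list is inferred from the suffixes visible in the log, not read from any pod.

	// Hypothetical sketch of resolver search-path expansion matching the
	// coredns queries above (ndots and the search list are assumptions
	// inferred from the log, not read from a pod's resolv.conf).
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		name := "hello-world-app.default.svc.cluster.local"
		search := []string{
			"ingress-nginx.svc.cluster.local",
			"svc.cluster.local",
			"cluster.local",
			"us-east-2.compute.internal",
		}
		const ndots = 5
		if strings.Count(name, ".") < ndots {
			for _, suffix := range search {
				fmt.Println(name + "." + suffix) // these lookups return NXDOMAIN
			}
		}
		fmt.Println(name) // the final, bare lookup returns NOERROR
	}
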
	
	
	==> describe nodes <==
	Name:               addons-638421
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-638421
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=addons-638421
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T17_47_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-638421
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 17:47:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-638421
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 17:56:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 17:55:11 +0000   Tue, 05 Nov 2024 17:47:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 17:55:11 +0000   Tue, 05 Nov 2024 17:47:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 17:55:11 +0000   Tue, 05 Nov 2024 17:47:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 17:55:11 +0000   Tue, 05 Nov 2024 17:48:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-638421
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 1aa8bab8a0b94fdea88af9bbdf5cb344
	  System UUID:                7313307f-ed44-4709-8a3d-c1f8b80a1e22
	  Boot ID:                    308934a7-38b0-4c4f-b876-76c17d9b7ecd
	  Kernel Version:             5.15.0-1071-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  default                     hello-world-app-55bf9c44b4-vt9rn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 coredns-7c65d6cfc9-fc54b                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m37s
	  kube-system                 etcd-addons-638421                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m43s
	  kube-system                 kindnet-mgcb7                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m37s
	  kube-system                 kube-apiserver-addons-638421               250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 kube-controller-manager-addons-638421      200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 kube-proxy-rjktl                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-scheduler-addons-638421               100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 metrics-server-84c5f94fbc-jnqlj            100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         8m32s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m32s
	  local-path-storage          local-path-provisioner-86d989889c-fw752    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m31s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  8m50s (x8 over 8m50s)  kubelet          Node addons-638421 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m50s (x8 over 8m50s)  kubelet          Node addons-638421 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m50s (x7 over 8m50s)  kubelet          Node addons-638421 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m43s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m43s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m43s                  kubelet          Node addons-638421 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m43s                  kubelet          Node addons-638421 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m43s                  kubelet          Node addons-638421 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m38s                  node-controller  Node addons-638421 event: Registered Node addons-638421 in Controller
	  Normal   NodeReady                7m51s                  kubelet          Node addons-638421 status is now: NodeReady
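
The Conditions table above is what the earlier "verifying NodePressure condition" step consumes: MemoryPressure, DiskPressure and PIDPressure must be False and Ready must be True. A small client-go sketch that reads the same conditions is shown below; the kubeconfig path is an assumption taken from the in-node commands in the log and would need adjusting wherever the sketch actually runs.

	// Sketch: list node conditions for addons-638421 with client-go
	// (kubeconfig path is an assumed, in-node value from the log).
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := client.CoreV1().Nodes().Get(context.Background(), "addons-638421", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Prints the same Type/Status/Reason columns as "describe nodes".
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}
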
	
	
	==> dmesg <==
	[Nov 5 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014171] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.476378] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025481] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.031094] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.017133] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.607383] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.934599] kauditd_printk_skb: 34 callbacks suppressed
	[Nov 5 16:44] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 5 17:18] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [5a11a95bd109a8865a9285cdedb9dfa7b613347720b8204b6c932a898ab430b0] <==
	{"level":"warn","ts":"2024-11-05T17:47:37.665114Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.509827ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128033044704398431 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-fc54b\" mod_revision:363 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-fc54b\" value_size:3919 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-fc54b\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-11-05T17:47:37.665302Z","caller":"traceutil/trace.go:171","msg":"trace[1292105864] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"195.115986ms","start":"2024-11-05T17:47:37.470174Z","end":"2024-11-05T17:47:37.665290Z","steps":["trace[1292105864] 'process raft request'  (duration: 195.031187ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:37.690663Z","caller":"traceutil/trace.go:171","msg":"trace[1320277466] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"288.969927ms","start":"2024-11-05T17:47:37.399616Z","end":"2024-11-05T17:47:37.688586Z","steps":["trace[1320277466] 'process raft request'  (duration: 57.400873ms)","trace[1320277466] 'compare'  (duration: 199.959296ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T17:47:38.069664Z","caller":"traceutil/trace.go:171","msg":"trace[783356345] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"146.829604ms","start":"2024-11-05T17:47:37.922819Z","end":"2024-11-05T17:47:38.069648Z","steps":["trace[783356345] 'process raft request'  (duration: 113.302622ms)","trace[783356345] 'compare'  (duration: 33.404669ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T17:47:40.070547Z","caller":"traceutil/trace.go:171","msg":"trace[595590458] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"250.454242ms","start":"2024-11-05T17:47:39.820076Z","end":"2024-11-05T17:47:40.070530Z","steps":["trace[595590458] 'process raft request'  (duration: 250.245832ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:47:40.909345Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.233653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:47:40.909491Z","caller":"traceutil/trace.go:171","msg":"trace[407263335] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:0; response_revision:430; }","duration":"131.389903ms","start":"2024-11-05T17:47:40.778084Z","end":"2024-11-05T17:47:40.909474Z","steps":["trace[407263335] 'agreement among raft nodes before linearized reading'  (duration: 131.182609ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:40.909735Z","caller":"traceutil/trace.go:171","msg":"trace[105230021] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"104.30137ms","start":"2024-11-05T17:47:40.805422Z","end":"2024-11-05T17:47:40.909724Z","steps":["trace[105230021] 'process raft request'  (duration: 103.565158ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:40.915996Z","caller":"traceutil/trace.go:171","msg":"trace[1784307454] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"110.467836ms","start":"2024-11-05T17:47:40.805509Z","end":"2024-11-05T17:47:40.915977Z","steps":["trace[1784307454] 'process raft request'  (duration: 103.686823ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:41.039929Z","caller":"traceutil/trace.go:171","msg":"trace[2096270239] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"162.384778ms","start":"2024-11-05T17:47:40.877519Z","end":"2024-11-05T17:47:41.039904Z","steps":["trace[2096270239] 'process raft request'  (duration: 49.968612ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:41.040215Z","caller":"traceutil/trace.go:171","msg":"trace[474042951] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"162.60689ms","start":"2024-11-05T17:47:40.877593Z","end":"2024-11-05T17:47:41.040200Z","steps":["trace[474042951] 'process raft request'  (duration: 58.920174ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:47:41.093296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.703036ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:47:41.093418Z","caller":"traceutil/trace.go:171","msg":"trace[396548148] range","detail":"{range_begin:/registry/clusterrolebindings/minikube-ingress-dns; range_end:; response_count:0; response_revision:437; }","duration":"210.82073ms","start":"2024-11-05T17:47:40.882581Z","end":"2024-11-05T17:47:41.093402Z","steps":["trace[396548148] 'agreement among raft nodes before linearized reading'  (duration: 175.401352ms)","trace[396548148] 'range keys from in-memory index tree'  (duration: 35.248449ms)"],"step_count":2}
	{"level":"warn","ts":"2024-11-05T17:47:41.093647Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.079167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-11-05T17:47:41.056854Z","caller":"traceutil/trace.go:171","msg":"trace[2055039178] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"179.128413ms","start":"2024-11-05T17:47:40.877638Z","end":"2024-11-05T17:47:41.056767Z","steps":["trace[2055039178] 'process raft request'  (duration: 58.923481ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:41.057086Z","caller":"traceutil/trace.go:171","msg":"trace[1685440622] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"179.358517ms","start":"2024-11-05T17:47:40.877713Z","end":"2024-11-05T17:47:41.057071Z","steps":["trace[1685440622] 'process raft request'  (duration: 58.87887ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:41.057157Z","caller":"traceutil/trace.go:171","msg":"trace[1809960471] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"179.367181ms","start":"2024-11-05T17:47:40.877778Z","end":"2024-11-05T17:47:41.057145Z","steps":["trace[1809960471] 'process raft request'  (duration: 59.284407ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:41.057830Z","caller":"traceutil/trace.go:171","msg":"trace[1746615062] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"161.579807ms","start":"2024-11-05T17:47:40.896156Z","end":"2024-11-05T17:47:41.057736Z","steps":["trace[1746615062] 'process raft request'  (duration: 41.202461ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:47:41.057963Z","caller":"traceutil/trace.go:171","msg":"trace[805537849] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"161.36356ms","start":"2024-11-05T17:47:40.896592Z","end":"2024-11-05T17:47:41.057956Z","steps":["trace[805537849] 'process raft request'  (duration: 40.877964ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:47:41.081367Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.308654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:2586"}
	{"level":"warn","ts":"2024-11-05T17:47:41.092559Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.988609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:47:41.096955Z","caller":"traceutil/trace.go:171","msg":"trace[346104435] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"164.017974ms","start":"2024-11-05T17:47:40.932924Z","end":"2024-11-05T17:47:41.096942Z","steps":["trace[346104435] 'process raft request'  (duration: 109.511916ms)","trace[346104435] 'compare'  (duration: 50.544036ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T17:47:41.104330Z","caller":"traceutil/trace.go:171","msg":"trace[942692642] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:437; }","duration":"221.755331ms","start":"2024-11-05T17:47:40.882558Z","end":"2024-11-05T17:47:41.104314Z","steps":["trace[942692642] 'agreement among raft nodes before linearized reading'  (duration: 175.476487ms)","trace[942692642] 'get authentication metadata'  (duration: 20.79067ms)","trace[942692642] 'range keys from in-memory index tree'  (duration: 14.769352ms)"],"step_count":3}
	{"level":"info","ts":"2024-11-05T17:47:41.114518Z","caller":"traceutil/trace.go:171","msg":"trace[960500303] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:437; }","duration":"232.458522ms","start":"2024-11-05T17:47:40.882041Z","end":"2024-11-05T17:47:41.114499Z","steps":["trace[960500303] 'agreement among raft nodes before linearized reading'  (duration: 55.846438ms)","trace[960500303] 'range keys from in-memory index tree'  (duration: 143.413281ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T17:47:41.115448Z","caller":"traceutil/trace.go:171","msg":"trace[1242843876] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:437; }","duration":"232.803678ms","start":"2024-11-05T17:47:40.882531Z","end":"2024-11-05T17:47:41.115334Z","steps":["trace[1242843876] 'agreement among raft nodes before linearized reading'  (duration: 198.307876ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:56:13 up  1:38,  0 users,  load average: 0.14, 1.45, 2.19
	Linux addons-638421 5.15.0-1071-aws #77~20.04.1-Ubuntu SMP Thu Oct 3 19:34:36 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1fd0ca35d5df4909214da663a3c098da720fc9ed8c6239354e0cb3f8f13bb952] <==
	I1105 17:54:12.252537       1 main.go:301] handling current node
	I1105 17:54:22.252760       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:54:22.252806       1 main.go:301] handling current node
	I1105 17:54:32.252111       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:54:32.252880       1 main.go:301] handling current node
	I1105 17:54:42.252278       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:54:42.252314       1 main.go:301] handling current node
	I1105 17:54:52.253348       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:54:52.253384       1 main.go:301] handling current node
	I1105 17:55:02.258554       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:55:02.258589       1 main.go:301] handling current node
	I1105 17:55:12.252232       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:55:12.252801       1 main.go:301] handling current node
	I1105 17:55:22.258589       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:55:22.258620       1 main.go:301] handling current node
	I1105 17:55:32.256778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:55:32.256935       1 main.go:301] handling current node
	I1105 17:55:42.251933       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:55:42.251966       1 main.go:301] handling current node
	I1105 17:55:52.258009       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:55:52.258042       1 main.go:301] handling current node
	I1105 17:56:02.251874       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:56:02.251908       1 main.go:301] handling current node
	I1105 17:56:12.252944       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:56:12.254065       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0b5b17e04603731c47b630aef78762da670ddfaa76eeece24dec40da13a7b11a] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1105 17:49:43.243517       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.80.28:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.80.28:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.80.28:443: connect: connection refused" logger="UnhandledError"
	I1105 17:49:43.336586       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1105 17:50:28.907403       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36862: use of closed network connection
	I1105 17:50:38.175976       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.180.43"}
	I1105 17:51:08.743148       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1105 17:51:30.051636       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:51:30.052066       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1105 17:51:30.067557       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:51:30.087618       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1105 17:51:30.156275       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:51:30.156474       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1105 17:51:30.189375       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:51:30.189491       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1105 17:51:30.210103       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1105 17:51:30.210208       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1105 17:51:31.189185       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1105 17:51:31.211384       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1105 17:51:31.309394       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1105 17:51:43.832999       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1105 17:51:44.959202       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1105 17:51:49.377930       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1105 17:51:49.665307       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.222.162"}
	I1105 17:54:08.252848       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.155.234"}
	
	
	==> kube-controller-manager [bab636744f5f70fc09f4fb8f397ac96095d9a6c9795ef232e477967332c7e4a0] <==
	I1105 17:54:32.985828       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="7.959µs"
	W1105 17:54:33.794392       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:54:33.794432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1105 17:54:40.416426       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-638421"
	I1105 17:54:43.098607       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W1105 17:54:52.861523       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:54:52.861559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1105 17:54:52.995368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-dc5db94f4" duration="8.616µs"
	W1105 17:54:53.403018       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:54:53.403059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:54:57.932441       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:54:57.932484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1105 17:55:11.179562       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-638421"
	W1105 17:55:23.334926       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:55:23.334968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:55:26.895020       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:55:26.895061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:55:27.346522       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:55:27.346562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:55:39.771757       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:55:39.771806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:56:02.918752       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:56:02.918794       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:56:10.822172       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:56:10.822219       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [4c604d9201f70b0ea751ef48bf305d9c2ebf108c89921e2973776a5c7428292f] <==
	I1105 17:47:40.297748       1 server_linux.go:66] "Using iptables proxy"
	I1105 17:47:41.548830       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1105 17:47:41.649867       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 17:47:41.804409       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1105 17:47:41.831233       1 server_linux.go:169] "Using iptables Proxier"
	I1105 17:47:41.888740       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 17:47:41.889281       1 server.go:483] "Version info" version="v1.31.2"
	I1105 17:47:41.889351       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 17:47:41.904802       1 config.go:328] "Starting node config controller"
	I1105 17:47:41.915320       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 17:47:41.914911       1 config.go:199] "Starting service config controller"
	I1105 17:47:41.971405       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 17:47:41.914932       1 config.go:105] "Starting endpoint slice config controller"
	I1105 17:47:41.971442       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 17:47:42.104715       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 17:47:42.130040       1 shared_informer.go:320] Caches are synced for node config
	I1105 17:47:42.144723       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [c43ffe852947636fbe0c15eba622eb62844e2cf189d07d412f1e024faf56ed4c] <==
	W1105 17:47:29.159014       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1105 17:47:29.159024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159095       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1105 17:47:29.159113       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159161       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1105 17:47:29.159179       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1105 17:47:29.159238       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159279       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1105 17:47:29.159293       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159382       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 17:47:29.159396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159440       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1105 17:47:29.159450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159544       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1105 17:47:29.159559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159596       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1105 17:47:29.159607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.159732       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1105 17:47:29.159743       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1105 17:47:29.159854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1105 17:47:29.159867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:47:29.160131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1105 17:47:29.160151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1105 17:47:30.253645       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 05 17:54:53 addons-638421 kubelet[1497]: I1105 17:54:53.324395    1497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cafda1e68ab388540078ba598c56c1e3108e0766d1971373c784500cb3221514"} err="failed to get container status \"cafda1e68ab388540078ba598c56c1e3108e0766d1971373c784500cb3221514\": rpc error: code = NotFound desc = could not find container \"cafda1e68ab388540078ba598c56c1e3108e0766d1971373c784500cb3221514\": container with ID starting with cafda1e68ab388540078ba598c56c1e3108e0766d1971373c784500cb3221514 not found: ID does not exist"
	Nov 05 17:54:53 addons-638421 kubelet[1497]: I1105 17:54:53.356741    1497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4pqg\" (UniqueName: \"kubernetes.io/projected/f3640f53-16bd-4dba-bbdc-f9ea46052384-kube-api-access-l4pqg\") pod \"f3640f53-16bd-4dba-bbdc-f9ea46052384\" (UID: \"f3640f53-16bd-4dba-bbdc-f9ea46052384\") "
	Nov 05 17:54:53 addons-638421 kubelet[1497]: I1105 17:54:53.360908    1497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3640f53-16bd-4dba-bbdc-f9ea46052384-kube-api-access-l4pqg" (OuterVolumeSpecName: "kube-api-access-l4pqg") pod "f3640f53-16bd-4dba-bbdc-f9ea46052384" (UID: "f3640f53-16bd-4dba-bbdc-f9ea46052384"). InnerVolumeSpecName "kube-api-access-l4pqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 05 17:54:53 addons-638421 kubelet[1497]: I1105 17:54:53.457180    1497 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-l4pqg\" (UniqueName: \"kubernetes.io/projected/f3640f53-16bd-4dba-bbdc-f9ea46052384-kube-api-access-l4pqg\") on node \"addons-638421\" DevicePath \"\""
	Nov 05 17:54:54 addons-638421 kubelet[1497]: I1105 17:54:54.801106    1497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3640f53-16bd-4dba-bbdc-f9ea46052384" path="/var/lib/kubelet/pods/f3640f53-16bd-4dba-bbdc-f9ea46052384/volumes"
	Nov 05 17:55:01 addons-638421 kubelet[1497]: E1105 17:55:01.122695    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829301122462052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:614091,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:55:01 addons-638421 kubelet[1497]: E1105 17:55:01.122739    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829301122462052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:614091,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:55:11 addons-638421 kubelet[1497]: E1105 17:55:11.125694    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829311125473666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:614091,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:55:11 addons-638421 kubelet[1497]: E1105 17:55:11.125735    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829311125473666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:614091,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:55:21 addons-638421 kubelet[1497]: E1105 17:55:21.128564    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829321128193668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:614091,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:55:21 addons-638421 kubelet[1497]: E1105 17:55:21.129044    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829321128193668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:614091,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:55:31 addons-638421 kubelet[1497]: E1105 17:55:31.131647    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829331131371045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:614091,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:55:31 addons-638421 kubelet[1497]: E1105 17:55:31.131683    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829331131371045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:614091,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:55:31 addons-638421 kubelet[1497]: I1105 17:55:31.227839    1497 scope.go:117] "RemoveContainer" containerID="41add762017b261bc27f9ddaddcafa942816cf1b9d639094682a93fa1716d04c"
	Nov 05 17:55:31 addons-638421 kubelet[1497]: I1105 17:55:31.246380    1497 scope.go:117] "RemoveContainer" containerID="3144822571a0c33e03409c8cb17aac56cbb5dec676e5de2b64f39fbcf7775a3c"
	Nov 05 17:55:31 addons-638421 kubelet[1497]: I1105 17:55:31.264707    1497 scope.go:117] "RemoveContainer" containerID="03251d61ea8b94384b0a9cac29dcdd5a4a7b69cc4775c3b5b8246a9c8481b664"
	Nov 05 17:55:41 addons-638421 kubelet[1497]: E1105 17:55:41.134777    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829341134539033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:614091,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:55:41 addons-638421 kubelet[1497]: E1105 17:55:41.134811    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829341134539033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:614091,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:55:51 addons-638421 kubelet[1497]: E1105 17:55:51.137413    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829351137166949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:614091,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:55:51 addons-638421 kubelet[1497]: E1105 17:55:51.137456    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829351137166949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:614091,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:55:53 addons-638421 kubelet[1497]: I1105 17:55:53.800283    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 05 17:56:01 addons-638421 kubelet[1497]: E1105 17:56:01.140005    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829361139766513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:614091,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:56:01 addons-638421 kubelet[1497]: E1105 17:56:01.140054    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829361139766513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:614091,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:56:11 addons-638421 kubelet[1497]: E1105 17:56:11.142730    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829371142494412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:614091,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:56:11 addons-638421 kubelet[1497]: E1105 17:56:11.142773    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829371142494412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:614091,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [928fd37a67be862e9b98e4c48f69508228c66790d1dbda30812c2f629b00bf18] <==
	I1105 17:48:23.204145       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1105 17:48:23.233817       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1105 17:48:23.233927       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1105 17:48:23.277526       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1105 17:48:23.277774       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-638421_30695142-7a44-4293-8e9e-e3d697d8213d!
	I1105 17:48:23.280167       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3fd2ffbe-9a3d-4013-9e87-0e75777dbe6e", APIVersion:"v1", ResourceVersion:"912", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-638421_30695142-7a44-4293-8e9e-e3d697d8213d became leader
	I1105 17:48:23.378627       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-638421_30695142-7a44-4293-8e9e-e3d697d8213d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-638421 -n addons-638421
helpers_test.go:261: (dbg) Run:  kubectl --context addons-638421 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (319.53s)
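Note on the MetricsServer failure above: the kube-apiserver log in the post-mortem shows v1beta1.metrics.k8s.io repeatedly failing with "dial tcp 10.111.80.28:443: connect: connection refused", so the metrics API never became available within the test's wait window. A minimal manual check, sketched here on the assumption that the addon's deployment carries the upstream k8s-app=metrics-server label, would be:

	kubectl --context addons-638421 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-638421 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-638421 top nodes

While the APIService is not Available, the top command is expected to fail rather than return node usage, which matches the recorded test outcome.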

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-256890 node delete m03 -v=7 --alsologtostderr: (11.625140765s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:518: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-256890       NotReady   control-plane   7m41s   v1.31.2
	ha-256890-m02   Ready      control-plane   7m15s   v1.31.2
	ha-256890-m04   Ready      <none>          4m51s   v1.31.2

                                                
                                                
-- /stdout --
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:526: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
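Note: the leading 'Unknown' in the go-template output above is the Ready condition of ha-256890, which the earlier kubectl get nodes output lists as NotReady; the two 'True' values belong to ha-256890-m02 and ha-256890-m04. The same per-node view can be reproduced with a jsonpath query, a sketch that assumes the same kubeconfig context the test is using:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'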
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-256890
helpers_test.go:235: (dbg) docker inspect ha-256890:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2049705509a4b93d716c958a7dfd3aaa22510b2739881bd24383c55492878c14",
	        "Created": "2024-11-05T18:00:49.901826665Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335778,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-11-05T18:05:58.112334352Z",
	            "FinishedAt": "2024-11-05T18:05:57.479550989Z"
	        },
	        "Image": "sha256:b9c385cbd7184c9dd77d4bc379a996635e559e337cc53655e2d39219017c804c",
	        "ResolvConfPath": "/var/lib/docker/containers/2049705509a4b93d716c958a7dfd3aaa22510b2739881bd24383c55492878c14/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2049705509a4b93d716c958a7dfd3aaa22510b2739881bd24383c55492878c14/hostname",
	        "HostsPath": "/var/lib/docker/containers/2049705509a4b93d716c958a7dfd3aaa22510b2739881bd24383c55492878c14/hosts",
	        "LogPath": "/var/lib/docker/containers/2049705509a4b93d716c958a7dfd3aaa22510b2739881bd24383c55492878c14/2049705509a4b93d716c958a7dfd3aaa22510b2739881bd24383c55492878c14-json.log",
	        "Name": "/ha-256890",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-256890:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-256890",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0840b7a1e51764193eff83fab694ba51da40565b9fcea8f29cd7035ae4a3811a-init/diff:/var/lib/docker/overlay2/f1c041cd086a3a2db4f768b1c920339fb85fb20492664e0532c0f72dc744887a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0840b7a1e51764193eff83fab694ba51da40565b9fcea8f29cd7035ae4a3811a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0840b7a1e51764193eff83fab694ba51da40565b9fcea8f29cd7035ae4a3811a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0840b7a1e51764193eff83fab694ba51da40565b9fcea8f29cd7035ae4a3811a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-256890",
	                "Source": "/var/lib/docker/volumes/ha-256890/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-256890",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-256890",
	                "name.minikube.sigs.k8s.io": "ha-256890",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f804ad221c66d17d2cff094283b4c2f37f8e1854d83fc6b9b5ed1921cfef9296",
	            "SandboxKey": "/var/run/docker/netns/f804ad221c66",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33179"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33178"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-256890": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "620990126bf3e08667686e40394cc00c03a114258f8678c31651ceeee2a053a0",
	                    "EndpointID": "de8fb2288a4b14e9c953321ccd9a0e0e4b6bfbaf5e15cad557f15371ce809e5c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-256890",
	                        "2049705509a4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-256890 -n ha-256890
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-256890 logs -n 25: (2.228072212s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-256890 ssh -n                                                                | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | ha-256890-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-256890 ssh -n ha-256890-m02 sudo cat                                         | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | /home/docker/cp-test_ha-256890-m03_ha-256890-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-256890 cp ha-256890-m03:/home/docker/cp-test.txt                             | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | ha-256890-m04:/home/docker/cp-test_ha-256890-m03_ha-256890-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-256890 ssh -n                                                                | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | ha-256890-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-256890 ssh -n ha-256890-m04 sudo cat                                         | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | /home/docker/cp-test_ha-256890-m03_ha-256890-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-256890 cp testdata/cp-test.txt                                               | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | ha-256890-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-256890 ssh -n                                                                | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | ha-256890-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-256890 cp ha-256890-m04:/home/docker/cp-test.txt                             | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile208591027/001/cp-test_ha-256890-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-256890 ssh -n                                                                | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | ha-256890-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-256890 cp ha-256890-m04:/home/docker/cp-test.txt                             | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | ha-256890:/home/docker/cp-test_ha-256890-m04_ha-256890.txt                      |           |         |         |                     |                     |
	| ssh     | ha-256890 ssh -n                                                                | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | ha-256890-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-256890 ssh -n ha-256890 sudo cat                                             | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | /home/docker/cp-test_ha-256890-m04_ha-256890.txt                                |           |         |         |                     |                     |
	| cp      | ha-256890 cp ha-256890-m04:/home/docker/cp-test.txt                             | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | ha-256890-m02:/home/docker/cp-test_ha-256890-m04_ha-256890-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-256890 ssh -n                                                                | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | ha-256890-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-256890 ssh -n ha-256890-m02 sudo cat                                         | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | /home/docker/cp-test_ha-256890-m04_ha-256890-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-256890 cp ha-256890-m04:/home/docker/cp-test.txt                             | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | ha-256890-m03:/home/docker/cp-test_ha-256890-m04_ha-256890-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-256890 ssh -n                                                                | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | ha-256890-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-256890 ssh -n ha-256890-m03 sudo cat                                         | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | /home/docker/cp-test_ha-256890-m04_ha-256890-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-256890 node stop m02 -v=7                                                    | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:04 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-256890 node start m02 -v=7                                                   | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:04 UTC | 05 Nov 24 18:05 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-256890 -v=7                                                          | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:05 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-256890 -v=7                                                               | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:05 UTC | 05 Nov 24 18:05 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-256890 --wait=true -v=7                                                   | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:05 UTC | 05 Nov 24 18:08 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-256890                                                               | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:08 UTC |                     |
	| node    | ha-256890 node delete m03 -v=7                                                  | ha-256890 | jenkins | v1.34.0 | 05 Nov 24 18:08 UTC | 05 Nov 24 18:08 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 18:05:57
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 18:05:57.767818  335586 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:05:57.768003  335586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:05:57.768013  335586 out.go:358] Setting ErrFile to fd 2...
	I1105 18:05:57.768019  335586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:05:57.768273  335586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-279806/.minikube/bin
	I1105 18:05:57.768703  335586 out.go:352] Setting JSON to false
	I1105 18:05:57.769614  335586 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6501,"bootTime":1730823457,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1105 18:05:57.769687  335586 start.go:139] virtualization:  
	I1105 18:05:57.773386  335586 out.go:177] * [ha-256890] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1105 18:05:57.776162  335586 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:05:57.776188  335586 notify.go:220] Checking for updates...
	I1105 18:05:57.781740  335586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:05:57.784801  335586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-279806/kubeconfig
	I1105 18:05:57.787470  335586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-279806/.minikube
	I1105 18:05:57.789983  335586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1105 18:05:57.792543  335586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:05:57.795785  335586 config.go:182] Loaded profile config "ha-256890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:05:57.795898  335586 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:05:57.820093  335586 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1105 18:05:57.820227  335586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 18:05:57.871981  335586 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:41 SystemTime:2024-11-05 18:05:57.862401557 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1105 18:05:57.872099  335586 docker.go:318] overlay module found
	I1105 18:05:57.876843  335586 out.go:177] * Using the docker driver based on existing profile
	I1105 18:05:57.879510  335586 start.go:297] selected driver: docker
	I1105 18:05:57.879527  335586 start.go:901] validating driver "docker" against &{Name:ha-256890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-256890 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:05:57.879688  335586 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:05:57.879795  335586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 18:05:57.932627  335586 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:41 SystemTime:2024-11-05 18:05:57.923347845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1105 18:05:57.933051  335586 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:05:57.933078  335586 cni.go:84] Creating CNI manager for ""
	I1105 18:05:57.933131  335586 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1105 18:05:57.933185  335586 start.go:340] cluster config:
	{Name:ha-256890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-256890 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:05:57.936074  335586 out.go:177] * Starting "ha-256890" primary control-plane node in "ha-256890" cluster
	I1105 18:05:57.938385  335586 cache.go:121] Beginning downloading kic base image for docker with crio
	I1105 18:05:57.941052  335586 out.go:177] * Pulling base image v0.0.45-1730282848-19883 ...
	I1105 18:05:57.943696  335586 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:05:57.943752  335586 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local docker daemon
	I1105 18:05:57.943835  335586 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	I1105 18:05:57.943848  335586 cache.go:56] Caching tarball of preloaded images
	I1105 18:05:57.943924  335586 preload.go:172] Found /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1105 18:05:57.943931  335586 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:05:57.944070  335586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/config.json ...
	I1105 18:05:57.962836  335586 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local docker daemon, skipping pull
	I1105 18:05:57.962860  335586 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 exists in daemon, skipping load
	I1105 18:05:57.962875  335586 cache.go:194] Successfully downloaded all kic artifacts
	I1105 18:05:57.962905  335586 start.go:360] acquireMachinesLock for ha-256890: {Name:mk66592f711eb1a404d9d15e48a25648f7fdb464 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:05:57.962967  335586 start.go:364] duration metric: took 35.521µs to acquireMachinesLock for "ha-256890"
	I1105 18:05:57.962988  335586 start.go:96] Skipping create...Using existing machine configuration
	I1105 18:05:57.962998  335586 fix.go:54] fixHost starting: 
	I1105 18:05:57.963249  335586 cli_runner.go:164] Run: docker container inspect ha-256890 --format={{.State.Status}}
	I1105 18:05:57.979246  335586 fix.go:112] recreateIfNeeded on ha-256890: state=Stopped err=<nil>
	W1105 18:05:57.979275  335586 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 18:05:57.982354  335586 out.go:177] * Restarting existing docker container for "ha-256890" ...
	I1105 18:05:57.985003  335586 cli_runner.go:164] Run: docker start ha-256890
	I1105 18:05:58.288268  335586 cli_runner.go:164] Run: docker container inspect ha-256890 --format={{.State.Status}}
	I1105 18:05:58.308712  335586 kic.go:430] container "ha-256890" state is running.
	I1105 18:05:58.309077  335586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256890
	I1105 18:05:58.328767  335586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/config.json ...
	I1105 18:05:58.329020  335586 machine.go:93] provisionDockerMachine start ...
	I1105 18:05:58.329077  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890
	I1105 18:05:58.346933  335586 main.go:141] libmachine: Using SSH client type: native
	I1105 18:05:58.347394  335586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33175 <nil> <nil>}
	I1105 18:05:58.347408  335586 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 18:05:58.348052  335586 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1105 18:06:01.468272  335586 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256890
	
	I1105 18:06:01.468298  335586 ubuntu.go:169] provisioning hostname "ha-256890"
	I1105 18:06:01.468382  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890
	I1105 18:06:01.489827  335586 main.go:141] libmachine: Using SSH client type: native
	I1105 18:06:01.490102  335586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33175 <nil> <nil>}
	I1105 18:06:01.490118  335586 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256890 && echo "ha-256890" | sudo tee /etc/hostname
	I1105 18:06:01.625141  335586 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256890
	
	I1105 18:06:01.625262  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890
	I1105 18:06:01.643276  335586 main.go:141] libmachine: Using SSH client type: native
	I1105 18:06:01.643587  335586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33175 <nil> <nil>}
	I1105 18:06:01.643609  335586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256890' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256890/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256890' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:06:01.768678  335586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:06:01.768706  335586 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19910-279806/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-279806/.minikube}
	I1105 18:06:01.768734  335586 ubuntu.go:177] setting up certificates
	I1105 18:06:01.768746  335586 provision.go:84] configureAuth start
	I1105 18:06:01.768809  335586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256890
	I1105 18:06:01.786538  335586 provision.go:143] copyHostCerts
	I1105 18:06:01.786586  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem
	I1105 18:06:01.786626  335586 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem, removing ...
	I1105 18:06:01.786640  335586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem
	I1105 18:06:01.786720  335586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem (1078 bytes)
	I1105 18:06:01.786822  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem
	I1105 18:06:01.786845  335586 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem, removing ...
	I1105 18:06:01.786850  335586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem
	I1105 18:06:01.786881  335586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem (1123 bytes)
	I1105 18:06:01.786938  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem
	I1105 18:06:01.786961  335586 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem, removing ...
	I1105 18:06:01.786968  335586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem
	I1105 18:06:01.786994  335586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem (1679 bytes)
	I1105 18:06:01.787061  335586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem org=jenkins.ha-256890 san=[127.0.0.1 192.168.49.2 ha-256890 localhost minikube]
	I1105 18:06:02.153751  335586 provision.go:177] copyRemoteCerts
	I1105 18:06:02.153827  335586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:06:02.153880  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890
	I1105 18:06:02.171706  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890/id_rsa Username:docker}
	I1105 18:06:02.261535  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:06:02.261601  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1105 18:06:02.287144  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:06:02.287213  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1105 18:06:02.312627  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:06:02.312690  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 18:06:02.336845  335586 provision.go:87] duration metric: took 568.08498ms to configureAuth
	I1105 18:06:02.336870  335586 ubuntu.go:193] setting minikube options for container-runtime
	I1105 18:06:02.337109  335586 config.go:182] Loaded profile config "ha-256890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:06:02.337220  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890
	I1105 18:06:02.353181  335586 main.go:141] libmachine: Using SSH client type: native
	I1105 18:06:02.353432  335586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33175 <nil> <nil>}
	I1105 18:06:02.353454  335586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:06:02.759261  335586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:06:02.759288  335586 machine.go:96] duration metric: took 4.430257926s to provisionDockerMachine
	I1105 18:06:02.759301  335586 start.go:293] postStartSetup for "ha-256890" (driver="docker")
	I1105 18:06:02.759312  335586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:06:02.759375  335586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:06:02.759420  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890
	I1105 18:06:02.782801  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890/id_rsa Username:docker}
	I1105 18:06:02.874410  335586 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:06:02.877595  335586 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1105 18:06:02.877629  335586 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1105 18:06:02.877640  335586 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1105 18:06:02.877647  335586 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1105 18:06:02.877660  335586 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-279806/.minikube/addons for local assets ...
	I1105 18:06:02.877718  335586 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-279806/.minikube/files for local assets ...
	I1105 18:06:02.877797  335586 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem -> 2851882.pem in /etc/ssl/certs
	I1105 18:06:02.877808  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem -> /etc/ssl/certs/2851882.pem
	I1105 18:06:02.877911  335586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:06:02.886356  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem --> /etc/ssl/certs/2851882.pem (1708 bytes)
	I1105 18:06:02.911571  335586 start.go:296] duration metric: took 152.254648ms for postStartSetup
	I1105 18:06:02.911687  335586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 18:06:02.911731  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890
	I1105 18:06:02.931684  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890/id_rsa Username:docker}
	I1105 18:06:03.017867  335586 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1105 18:06:03.022921  335586 fix.go:56] duration metric: took 5.059914862s for fixHost
	I1105 18:06:03.022950  335586 start.go:83] releasing machines lock for "ha-256890", held for 5.059970977s
	I1105 18:06:03.023031  335586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256890
	I1105 18:06:03.039959  335586 ssh_runner.go:195] Run: cat /version.json
	I1105 18:06:03.040014  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890
	I1105 18:06:03.040109  335586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:06:03.040162  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890
	I1105 18:06:03.059129  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890/id_rsa Username:docker}
	I1105 18:06:03.070285  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890/id_rsa Username:docker}
	I1105 18:06:03.283476  335586 ssh_runner.go:195] Run: systemctl --version
	I1105 18:06:03.287630  335586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:06:03.427370  335586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 18:06:03.431610  335586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:06:03.440588  335586 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1105 18:06:03.440687  335586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:06:03.449506  335586 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1105 18:06:03.449532  335586 start.go:495] detecting cgroup driver to use...
	I1105 18:06:03.449585  335586 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1105 18:06:03.449654  335586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:06:03.461530  335586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:06:03.473201  335586 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:06:03.473273  335586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:06:03.486062  335586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:06:03.497188  335586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:06:03.578640  335586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:06:03.672895  335586 docker.go:233] disabling docker service ...
	I1105 18:06:03.673013  335586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:06:03.685137  335586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:06:03.696747  335586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:06:03.783353  335586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:06:03.863505  335586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:06:03.874747  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:06:03.892540  335586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:06:03.892646  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:06:03.902858  335586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:06:03.902955  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:06:03.912617  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:06:03.921996  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:06:03.931606  335586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:06:03.940289  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:06:03.950040  335586 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:06:03.959375  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:06:03.968907  335586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:06:03.977583  335586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:06:03.985983  335586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:06:04.064716  335586 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 18:06:04.183740  335586 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:06:04.183821  335586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:06:04.187602  335586 start.go:563] Will wait 60s for crictl version
	I1105 18:06:04.187718  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:06:04.191132  335586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:06:04.233401  335586 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1105 18:06:04.233491  335586 ssh_runner.go:195] Run: crio --version
	I1105 18:06:04.270820  335586 ssh_runner.go:195] Run: crio --version
	I1105 18:06:04.310123  335586 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1105 18:06:04.312630  335586 cli_runner.go:164] Run: docker network inspect ha-256890 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1105 18:06:04.326484  335586 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1105 18:06:04.330162  335586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:06:04.340853  335586 kubeadm.go:883] updating cluster {Name:ha-256890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-256890 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 18:06:04.341007  335586 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:06:04.341074  335586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:06:04.389719  335586 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:06:04.389744  335586 crio.go:433] Images already preloaded, skipping extraction
	I1105 18:06:04.389814  335586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 18:06:04.429952  335586 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 18:06:04.429977  335586 cache_images.go:84] Images are preloaded, skipping loading
	I1105 18:06:04.429988  335586 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1105 18:06:04.430151  335586 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-256890 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-256890 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:06:04.430241  335586 ssh_runner.go:195] Run: crio config
	I1105 18:06:04.482164  335586 cni.go:84] Creating CNI manager for ""
	I1105 18:06:04.482190  335586 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1105 18:06:04.482200  335586 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 18:06:04.482225  335586 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-256890 NodeName:ha-256890 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 18:06:04.482361  335586 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-256890"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
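The kubeadm config printed above is one multi-document YAML file (written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down) carrying an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration. As a quick sanity check that all four documents made it onto the node, the following standalone Go sketch (not minikube code; the file path is taken from the scp step below) splits the file on document separators and reports each document's kind:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path taken from the "scp memory --> /var/tmp/minikube/kubeadm.yaml.new" step below.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Split on YAML document separators and print the kind of each document.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
			}
		}
	}
}

Run against the file generated above, this should print InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in order.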
	I1105 18:06:04.482386  335586 kube-vip.go:115] generating kube-vip config ...
	I1105 18:06:04.482438  335586 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1105 18:06:04.494939  335586 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 18:06:04.495049  335586 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
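The kube-vip static pod above announces the HA virtual IP 192.168.49.254 on eth0, holds leader election through the plndr-cp-lock lease, and, because cp_enable and lb_enable are set, load-balances the control plane on port 8443. A plain TCP dial is enough to see whether the VIP is currently answering; this throwaway Go probe (not minikube code; address and port copied from the config above) does just that:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP and port taken from the kube-vip config above.
	conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP accepting connections on 8443")
}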
	I1105 18:06:04.495113  335586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:06:04.503816  335586 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:06:04.503913  335586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1105 18:06:04.512298  335586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1105 18:06:04.530070  335586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:06:04.547016  335586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2283 bytes)
	I1105 18:06:04.563894  335586 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1105 18:06:04.581441  335586 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:06:04.584940  335586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:06:04.595461  335586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:06:04.671533  335586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:06:04.684447  335586 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890 for IP: 192.168.49.2
	I1105 18:06:04.684515  335586 certs.go:194] generating shared ca certs ...
	I1105 18:06:04.684547  335586 certs.go:226] acquiring lock for ca certs: {Name:mk7e394808202081d7250bf8ad59a3f119279ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:06:04.684810  335586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key
	I1105 18:06:04.684885  335586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key
	I1105 18:06:04.684899  335586 certs.go:256] generating profile certs ...
	I1105 18:06:04.685029  335586 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/client.key
	I1105 18:06:04.685061  335586 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.key.aa57ebe3
	I1105 18:06:04.685083  335586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.crt.aa57ebe3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1105 18:06:05.165005  335586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.crt.aa57ebe3 ...
	I1105 18:06:05.165038  335586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.crt.aa57ebe3: {Name:mk733ff4f12a3ed799c5944e778c679cb29e3f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:06:05.165241  335586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.key.aa57ebe3 ...
	I1105 18:06:05.165256  335586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.key.aa57ebe3: {Name:mk84cc1d095ad188c3b66ed68d6c9c081d470433 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:06:05.165344  335586 certs.go:381] copying /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.crt.aa57ebe3 -> /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.crt
	I1105 18:06:05.165492  335586 certs.go:385] copying /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.key.aa57ebe3 -> /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.key
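The regenerated apiserver certificate is signed for every address the control plane can be reached on: the in-cluster service IP 10.96.0.1, the three control-plane node IPs and the kube-vip VIP 192.168.49.254. A small standalone Go check (illustrative only; "apiserver.crt" is a placeholder path) confirms that a given IP really is present in the certificate's IP SANs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	// Placeholder path; point this at the PEM certificate to inspect.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The HA virtual IP from the log above.
	want := net.ParseIP("192.168.49.254")
	for _, ip := range cert.IPAddresses {
		if ip.Equal(want) {
			fmt.Println("certificate covers", want)
			return
		}
	}
	fmt.Println("certificate does NOT cover", want)
}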
	I1105 18:06:05.165641  335586 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/proxy-client.key
	I1105 18:06:05.165660  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:06:05.165676  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:06:05.165692  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:06:05.165704  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:06:05.165724  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:06:05.165745  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:06:05.165763  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:06:05.165777  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:06:05.165828  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/285188.pem (1338 bytes)
	W1105 18:06:05.165862  335586 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-279806/.minikube/certs/285188_empty.pem, impossibly tiny 0 bytes
	I1105 18:06:05.165876  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 18:06:05.165902  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem (1078 bytes)
	I1105 18:06:05.165933  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:06:05.165957  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem (1679 bytes)
	I1105 18:06:05.166003  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem (1708 bytes)
	I1105 18:06:05.166037  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/285188.pem -> /usr/share/ca-certificates/285188.pem
	I1105 18:06:05.166054  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem -> /usr/share/ca-certificates/2851882.pem
	I1105 18:06:05.166070  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:06:05.166727  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:06:05.193014  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 18:06:05.217653  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:06:05.243176  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 18:06:05.268187  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1105 18:06:05.292662  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 18:06:05.317581  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:06:05.342368  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 18:06:05.366729  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/certs/285188.pem --> /usr/share/ca-certificates/285188.pem (1338 bytes)
	I1105 18:06:05.390218  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem --> /usr/share/ca-certificates/2851882.pem (1708 bytes)
	I1105 18:06:05.414008  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:06:05.437526  335586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 18:06:05.455488  335586 ssh_runner.go:195] Run: openssl version
	I1105 18:06:05.460970  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/285188.pem && ln -fs /usr/share/ca-certificates/285188.pem /etc/ssl/certs/285188.pem"
	I1105 18:06:05.470421  335586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/285188.pem
	I1105 18:06:05.474344  335586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:57 /usr/share/ca-certificates/285188.pem
	I1105 18:06:05.474434  335586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/285188.pem
	I1105 18:06:05.481392  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/285188.pem /etc/ssl/certs/51391683.0"
	I1105 18:06:05.490427  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2851882.pem && ln -fs /usr/share/ca-certificates/2851882.pem /etc/ssl/certs/2851882.pem"
	I1105 18:06:05.499658  335586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2851882.pem
	I1105 18:06:05.503305  335586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:57 /usr/share/ca-certificates/2851882.pem
	I1105 18:06:05.503387  335586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2851882.pem
	I1105 18:06:05.510629  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2851882.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:06:05.520151  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:06:05.529691  335586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:06:05.533552  335586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:47 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:06:05.533692  335586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:06:05.540938  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:06:05.549983  335586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:06:05.553716  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 18:06:05.560767  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 18:06:05.567966  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 18:06:05.574959  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 18:06:05.582390  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 18:06:05.589173  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
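Each of the openssl invocations above is the standard "-checkend 86400" probe: it exits non-zero if the certificate expires within the next 24 hours. An equivalent check in plain Go (a sketch, not the code minikube runs; pass any PEM certificate path as the first argument) looks like this:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(2)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	// Mirror `openssl x509 -checkend 86400`: fail if the cert expires within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}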
	I1105 18:06:05.595944  335586 kubeadm.go:392] StartCluster: {Name:ha-256890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-256890 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:06:05.596075  335586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 18:06:05.596136  335586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 18:06:05.637055  335586 cri.go:89] found id: ""
	I1105 18:06:05.637126  335586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 18:06:05.647778  335586 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1105 18:06:05.647804  335586 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1105 18:06:05.647870  335586 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1105 18:06:05.657358  335586 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1105 18:06:05.657824  335586 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-256890" does not appear in /home/jenkins/minikube-integration/19910-279806/kubeconfig
	I1105 18:06:05.657946  335586 kubeconfig.go:62] /home/jenkins/minikube-integration/19910-279806/kubeconfig needs updating (will repair): [kubeconfig missing "ha-256890" cluster setting kubeconfig missing "ha-256890" context setting]
	I1105 18:06:05.658215  335586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/kubeconfig: {Name:mk94e1e77f14516629f7a9763439bf1ac2a3fdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:06:05.658615  335586 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19910-279806/kubeconfig
	I1105 18:06:05.658893  335586 kapi.go:59] client config for ha-256890: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/client.key", CAFile:"/home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e9d0d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1105 18:06:05.659558  335586 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1105 18:06:05.659648  335586 cert_rotation.go:140] Starting client certificate rotation controller
	I1105 18:06:05.671660  335586 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I1105 18:06:05.671728  335586 kubeadm.go:597] duration metric: took 23.916477ms to restartPrimaryControlPlane
	I1105 18:06:05.671744  335586 kubeadm.go:394] duration metric: took 75.808166ms to StartCluster
	I1105 18:06:05.671760  335586 settings.go:142] acquiring lock: {Name:mk4446dbaea3bd85b9adc705341ee771323ec865 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:06:05.671854  335586 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-279806/kubeconfig
	I1105 18:06:05.672529  335586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-279806/kubeconfig: {Name:mk94e1e77f14516629f7a9763439bf1ac2a3fdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:06:05.672782  335586 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:06:05.672808  335586 start.go:241] waiting for startup goroutines ...
	I1105 18:06:05.672828  335586 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1105 18:06:05.673099  335586 config.go:182] Loaded profile config "ha-256890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:06:05.678981  335586 out.go:177] * Enabled addons: 
	I1105 18:06:05.681112  335586 addons.go:510] duration metric: took 8.290432ms for enable addons: enabled=[]
	I1105 18:06:05.681149  335586 start.go:246] waiting for cluster config update ...
	I1105 18:06:05.681159  335586 start.go:255] writing updated cluster config ...
	I1105 18:06:05.683753  335586 out.go:201] 
	I1105 18:06:05.686305  335586 config.go:182] Loaded profile config "ha-256890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:06:05.686419  335586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/config.json ...
	I1105 18:06:05.689290  335586 out.go:177] * Starting "ha-256890-m02" control-plane node in "ha-256890" cluster
	I1105 18:06:05.691622  335586 cache.go:121] Beginning downloading kic base image for docker with crio
	I1105 18:06:05.694109  335586 out.go:177] * Pulling base image v0.0.45-1730282848-19883 ...
	I1105 18:06:05.696588  335586 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:06:05.696626  335586 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local docker daemon
	I1105 18:06:05.696864  335586 cache.go:56] Caching tarball of preloaded images
	I1105 18:06:05.696953  335586 preload.go:172] Found /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1105 18:06:05.696966  335586 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:06:05.697131  335586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/config.json ...
	I1105 18:06:05.715352  335586 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local docker daemon, skipping pull
	I1105 18:06:05.715376  335586 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 exists in daemon, skipping load
	I1105 18:06:05.715396  335586 cache.go:194] Successfully downloaded all kic artifacts
	I1105 18:06:05.715422  335586 start.go:360] acquireMachinesLock for ha-256890-m02: {Name:mk12933be610fb354c17cbb158595ad27f7f230e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:06:05.715488  335586 start.go:364] duration metric: took 45.653µs to acquireMachinesLock for "ha-256890-m02"
	I1105 18:06:05.715509  335586 start.go:96] Skipping create...Using existing machine configuration
	I1105 18:06:05.715522  335586 fix.go:54] fixHost starting: m02
	I1105 18:06:05.715770  335586 cli_runner.go:164] Run: docker container inspect ha-256890-m02 --format={{.State.Status}}
	I1105 18:06:05.731838  335586 fix.go:112] recreateIfNeeded on ha-256890-m02: state=Stopped err=<nil>
	W1105 18:06:05.731874  335586 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 18:06:05.734729  335586 out.go:177] * Restarting existing docker container for "ha-256890-m02" ...
	I1105 18:06:05.737407  335586 cli_runner.go:164] Run: docker start ha-256890-m02
	I1105 18:06:06.013448  335586 cli_runner.go:164] Run: docker container inspect ha-256890-m02 --format={{.State.Status}}
	I1105 18:06:06.035678  335586 kic.go:430] container "ha-256890-m02" state is running.
	I1105 18:06:06.036042  335586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256890-m02
	I1105 18:06:06.063254  335586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/config.json ...
	I1105 18:06:06.063501  335586 machine.go:93] provisionDockerMachine start ...
	I1105 18:06:06.063558  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m02
	I1105 18:06:06.088349  335586 main.go:141] libmachine: Using SSH client type: native
	I1105 18:06:06.088646  335586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1105 18:06:06.088662  335586 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 18:06:06.089204  335586 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59514->127.0.0.1:33180: read: connection reset by peer
	I1105 18:06:09.275232  335586 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256890-m02
	
	I1105 18:06:09.275305  335586 ubuntu.go:169] provisioning hostname "ha-256890-m02"
	I1105 18:06:09.275411  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m02
	I1105 18:06:09.301156  335586 main.go:141] libmachine: Using SSH client type: native
	I1105 18:06:09.301394  335586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1105 18:06:09.301406  335586 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256890-m02 && echo "ha-256890-m02" | sudo tee /etc/hostname
	I1105 18:06:09.503149  335586 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256890-m02
	
	I1105 18:06:09.503338  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m02
	I1105 18:06:09.537283  335586 main.go:141] libmachine: Using SSH client type: native
	I1105 18:06:09.537525  335586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1105 18:06:09.537541  335586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256890-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256890-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256890-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:06:09.726069  335586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:06:09.726142  335586 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19910-279806/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-279806/.minikube}
	I1105 18:06:09.726172  335586 ubuntu.go:177] setting up certificates
	I1105 18:06:09.726211  335586 provision.go:84] configureAuth start
	I1105 18:06:09.726303  335586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256890-m02
	I1105 18:06:09.749624  335586 provision.go:143] copyHostCerts
	I1105 18:06:09.749664  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem
	I1105 18:06:09.749696  335586 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem, removing ...
	I1105 18:06:09.749703  335586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem
	I1105 18:06:09.749779  335586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem (1078 bytes)
	I1105 18:06:09.749853  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem
	I1105 18:06:09.749870  335586 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem, removing ...
	I1105 18:06:09.749874  335586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem
	I1105 18:06:09.749900  335586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem (1123 bytes)
	I1105 18:06:09.749935  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem
	I1105 18:06:09.749950  335586 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem, removing ...
	I1105 18:06:09.749954  335586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem
	I1105 18:06:09.749993  335586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem (1679 bytes)
	I1105 18:06:09.750046  335586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem org=jenkins.ha-256890-m02 san=[127.0.0.1 192.168.49.3 ha-256890-m02 localhost minikube]
	I1105 18:06:10.433983  335586 provision.go:177] copyRemoteCerts
	I1105 18:06:10.434053  335586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:06:10.434110  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m02
	I1105 18:06:10.450853  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890-m02/id_rsa Username:docker}
	I1105 18:06:10.541297  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:06:10.541359  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1105 18:06:10.565361  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:06:10.565427  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 18:06:10.589578  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:06:10.589637  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1105 18:06:10.616764  335586 provision.go:87] duration metric: took 890.5212ms to configureAuth
	I1105 18:06:10.616791  335586 ubuntu.go:193] setting minikube options for container-runtime
	I1105 18:06:10.617035  335586 config.go:182] Loaded profile config "ha-256890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:06:10.617142  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m02
	I1105 18:06:10.633764  335586 main.go:141] libmachine: Using SSH client type: native
	I1105 18:06:10.634047  335586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33180 <nil> <nil>}
	I1105 18:06:10.634067  335586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:06:10.992910  335586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:06:10.992977  335586 machine.go:96] duration metric: took 4.929466036s to provisionDockerMachine
	I1105 18:06:10.993005  335586 start.go:293] postStartSetup for "ha-256890-m02" (driver="docker")
	I1105 18:06:10.993030  335586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:06:10.993117  335586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:06:10.993188  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m02
	I1105 18:06:11.013001  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890-m02/id_rsa Username:docker}
	I1105 18:06:11.175419  335586 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:06:11.196760  335586 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1105 18:06:11.196795  335586 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1105 18:06:11.196807  335586 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1105 18:06:11.196815  335586 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1105 18:06:11.196826  335586 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-279806/.minikube/addons for local assets ...
	I1105 18:06:11.196882  335586 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-279806/.minikube/files for local assets ...
	I1105 18:06:11.196957  335586 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem -> 2851882.pem in /etc/ssl/certs
	I1105 18:06:11.196965  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem -> /etc/ssl/certs/2851882.pem
	I1105 18:06:11.197071  335586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:06:11.219262  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem --> /etc/ssl/certs/2851882.pem (1708 bytes)
	I1105 18:06:11.291007  335586 start.go:296] duration metric: took 297.972857ms for postStartSetup
	I1105 18:06:11.291090  335586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 18:06:11.291151  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m02
	I1105 18:06:11.315825  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890-m02/id_rsa Username:docker}
	I1105 18:06:11.475196  335586 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1105 18:06:11.486583  335586 fix.go:56] duration metric: took 5.771053725s for fixHost
	I1105 18:06:11.486607  335586 start.go:83] releasing machines lock for "ha-256890-m02", held for 5.771108979s
	I1105 18:06:11.486677  335586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256890-m02
	I1105 18:06:11.511921  335586 out.go:177] * Found network options:
	I1105 18:06:11.514982  335586 out.go:177]   - NO_PROXY=192.168.49.2
	W1105 18:06:11.518025  335586 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 18:06:11.518077  335586 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:06:11.518143  335586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:06:11.518200  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m02
	I1105 18:06:11.518461  335586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:06:11.518516  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m02
	I1105 18:06:11.560791  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890-m02/id_rsa Username:docker}
	I1105 18:06:11.563739  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890-m02/id_rsa Username:docker}
	I1105 18:06:12.024487  335586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 18:06:12.048114  335586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:06:12.090933  335586 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1105 18:06:12.091081  335586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:06:12.104442  335586 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1105 18:06:12.104525  335586 start.go:495] detecting cgroup driver to use...
	I1105 18:06:12.104574  335586 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1105 18:06:12.104700  335586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:06:12.135483  335586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:06:12.160474  335586 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:06:12.160550  335586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:06:12.191246  335586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:06:12.221496  335586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:06:12.521973  335586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:06:12.811941  335586 docker.go:233] disabling docker service ...
	I1105 18:06:12.812013  335586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:06:12.869565  335586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:06:12.909936  335586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:06:13.146961  335586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:06:13.369658  335586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:06:13.433103  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:06:13.515380  335586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:06:13.515448  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:06:13.569128  335586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:06:13.569201  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:06:13.629218  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:06:13.674588  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:06:13.705284  335586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:06:13.745200  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:06:13.783387  335586 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:06:13.812493  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:06:13.853368  335586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:06:13.886862  335586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:06:13.922762  335586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:06:14.149804  335586 ssh_runner.go:195] Run: sudo systemctl restart crio
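The sed edits above pin the pause image to registry.k8s.io/pause:3.10, switch cri-o to the cgroupfs cgroup manager with conmon_cgroup = "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before systemd is reloaded and cri-o restarted. A minimal standalone Go check (not part of minikube; the path is taken from the commands above) prints the resulting assignments from 02-crio.conf:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Drop-in config edited by the sed commands above.
	f, err := os.Open("/etc/crio/crio.conf.d/02-crio.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	// Keys (and the injected sysctl value) expected after the edits.
	wanted := []string{"pause_image", "cgroup_manager", "conmon_cgroup", "default_sysctls", `"net.ipv4`}
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		for _, key := range wanted {
			if strings.HasPrefix(line, key) {
				fmt.Println(line)
			}
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}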
	I1105 18:06:14.551842  335586 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:06:14.551937  335586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:06:14.558221  335586 start.go:563] Will wait 60s for crictl version
	I1105 18:06:14.558284  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:06:14.565166  335586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:06:14.651022  335586 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1105 18:06:14.651187  335586 ssh_runner.go:195] Run: crio --version
	I1105 18:06:14.741859  335586 ssh_runner.go:195] Run: crio --version
	I1105 18:06:14.829198  335586 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1105 18:06:14.831932  335586 out.go:177]   - env NO_PROXY=192.168.49.2
	I1105 18:06:14.834700  335586 cli_runner.go:164] Run: docker network inspect ha-256890 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1105 18:06:14.869236  335586 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1105 18:06:14.873147  335586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:06:14.889462  335586 mustload.go:65] Loading cluster: ha-256890
	I1105 18:06:14.889694  335586 config.go:182] Loaded profile config "ha-256890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:06:14.889939  335586 cli_runner.go:164] Run: docker container inspect ha-256890 --format={{.State.Status}}
	I1105 18:06:14.924706  335586 host.go:66] Checking if "ha-256890" exists ...
	I1105 18:06:14.924987  335586 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890 for IP: 192.168.49.3
	I1105 18:06:14.924995  335586 certs.go:194] generating shared ca certs ...
	I1105 18:06:14.925009  335586 certs.go:226] acquiring lock for ca certs: {Name:mk7e394808202081d7250bf8ad59a3f119279ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:06:14.925138  335586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key
	I1105 18:06:14.925178  335586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key
	I1105 18:06:14.925188  335586 certs.go:256] generating profile certs ...
	I1105 18:06:14.925271  335586 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/client.key
	I1105 18:06:14.925330  335586 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.key.d9376037
	I1105 18:06:14.925373  335586 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/proxy-client.key
	I1105 18:06:14.925382  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:06:14.925394  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:06:14.925404  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:06:14.925416  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:06:14.925427  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:06:14.925438  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:06:14.925449  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:06:14.925458  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:06:14.925508  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/285188.pem (1338 bytes)
	W1105 18:06:14.925536  335586 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-279806/.minikube/certs/285188_empty.pem, impossibly tiny 0 bytes
	I1105 18:06:14.925544  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 18:06:14.925569  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem (1078 bytes)
	I1105 18:06:14.925592  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:06:14.925612  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem (1679 bytes)
	I1105 18:06:14.925652  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem (1708 bytes)
	I1105 18:06:14.925679  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:06:14.925693  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/285188.pem -> /usr/share/ca-certificates/285188.pem
	I1105 18:06:14.925703  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem -> /usr/share/ca-certificates/2851882.pem
	I1105 18:06:14.925759  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890
	I1105 18:06:14.952832  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890/id_rsa Username:docker}
	I1105 18:06:15.064914  335586 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 18:06:15.072787  335586 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 18:06:15.096900  335586 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 18:06:15.116633  335586 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1105 18:06:15.149815  335586 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 18:06:15.156560  335586 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 18:06:15.188751  335586 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 18:06:15.194312  335586 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 18:06:15.214140  335586 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 18:06:15.219804  335586 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 18:06:15.254106  335586 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 18:06:15.260123  335586 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1105 18:06:15.276174  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:06:15.316801  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 18:06:15.354099  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:06:15.393852  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 18:06:15.437980  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1105 18:06:15.476157  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 18:06:15.517935  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:06:15.571314  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 18:06:15.609885  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:06:15.649056  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/certs/285188.pem --> /usr/share/ca-certificates/285188.pem (1338 bytes)
	I1105 18:06:15.689799  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem --> /usr/share/ca-certificates/2851882.pem (1708 bytes)
	I1105 18:06:15.726143  335586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 18:06:15.749173  335586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1105 18:06:15.787527  335586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 18:06:15.814777  335586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 18:06:15.849295  335586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 18:06:15.879062  335586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1105 18:06:15.908069  335586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 18:06:15.940567  335586 ssh_runner.go:195] Run: openssl version
	I1105 18:06:15.948390  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/285188.pem && ln -fs /usr/share/ca-certificates/285188.pem /etc/ssl/certs/285188.pem"
	I1105 18:06:15.958807  335586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/285188.pem
	I1105 18:06:15.963994  335586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:57 /usr/share/ca-certificates/285188.pem
	I1105 18:06:15.964088  335586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/285188.pem
	I1105 18:06:15.973076  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/285188.pem /etc/ssl/certs/51391683.0"
	I1105 18:06:15.986793  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2851882.pem && ln -fs /usr/share/ca-certificates/2851882.pem /etc/ssl/certs/2851882.pem"
	I1105 18:06:16.001670  335586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2851882.pem
	I1105 18:06:16.009021  335586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:57 /usr/share/ca-certificates/2851882.pem
	I1105 18:06:16.009100  335586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2851882.pem
	I1105 18:06:16.023496  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2851882.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:06:16.045897  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:06:16.060291  335586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:06:16.066887  335586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:47 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:06:16.066982  335586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:06:16.081300  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
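
Each CA installed above lands in the system trust store twice: the PEM itself under /usr/share/ca-certificates, and a symlink named after its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0) under /etc/ssl/certs, which is how OpenSSL-based clients look certificates up. A minimal Go sketch of that hash-and-link step, shelling out to openssl the same way the log does (paths are illustrative only):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installTrustedCert links certPath into /etc/ssl/certs under its
    // OpenSSL subject hash so TLS clients on the node can find it.
    func installTrustedCert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// ln -fs: drop any stale link first, then point it at the cert.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installTrustedCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
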
	I1105 18:06:16.093013  335586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:06:16.097342  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 18:06:16.108444  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 18:06:16.121443  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 18:06:16.133066  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 18:06:16.145762  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 18:06:16.165260  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
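
The -checkend 86400 runs above ask openssl whether each control-plane certificate will still be valid 24 hours from now; a non-zero exit would mean the certificate is about to expire and needs regenerating before kubelet is (re)started. A small sketch of the same check, assuming nothing beyond the openssl binary:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // validFor24h reports whether the certificate at path is still valid
    // 86400 seconds (24h) from now, mirroring `openssl x509 -checkend 86400`.
    func validFor24h(path string) bool {
    	// openssl exits 0 if the cert will not expire within the window.
    	err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
    	return err == nil
    }

    func main() {
    	for _, p := range []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    	} {
    		fmt.Printf("%s valid for 24h: %v\n", p, validFor24h(p))
    	}
    }
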
	I1105 18:06:16.175148  335586 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.2 crio true true} ...
	I1105 18:06:16.175257  335586 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-256890-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-256890 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:06:16.175296  335586 kube-vip.go:115] generating kube-vip config ...
	I1105 18:06:16.175359  335586 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1105 18:06:16.202878  335586 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 18:06:16.202950  335586 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
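
The config above is written out as a static pod manifest: kube-vip runs on the host network with NET_ADMIN/NET_RAW, announces the VIP 192.168.49.254 over ARP on eth0, takes the plndr-cp-lock lease for leader election, and, because lsmod found ip_vs, also load-balances the control plane on port 8443 (lb_enable/lb_port). A sketch of how such a manifest can be rendered from a template; the template text below is a heavily abbreviated stand-in, not minikube's actual one:

    package main

    import (
    	"os"
    	"text/template"
    )

    // vipConfig holds the handful of values that differ per cluster.
    type vipConfig struct {
    	Address   string // control-plane VIP, e.g. 192.168.49.254
    	Port      int    // API server port the VIP fronts
    	Interface string // interface to announce the VIP on
    	EnableLB  bool   // set when ip_vs is available
    }

    // A shortened stand-in for the full static-pod template.
    const vipTemplate = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        args: ["manager"]
        env:
        - {name: address, value: "{{ .Address }}"}
        - {name: port, value: "{{ .Port }}"}
        - {name: vip_interface, value: "{{ .Interface }}"}
        - {name: lb_enable, value: "{{ .EnableLB }}"}
      hostNetwork: true
    `

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(vipTemplate))
    	cfg := vipConfig{Address: "192.168.49.254", Port: 8443, Interface: "eth0", EnableLB: true}
    	// In the log this ends up at /etc/kubernetes/manifests/kube-vip.yaml.
    	_ = t.Execute(os.Stdout, cfg)
    }
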
	I1105 18:06:16.203041  335586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:06:16.221479  335586 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:06:16.221573  335586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 18:06:16.237558  335586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1105 18:06:16.263332  335586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:06:16.284257  335586 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1105 18:06:16.309787  335586 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:06:16.315818  335586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
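
The bash one-liner above keeps the hosts entry idempotent: any existing control-plane.minikube.internal line is filtered out before the current VIP mapping is appended, so repeated node joins never stack stale entries. An equivalent sketch in Go, using the same file and hostname:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry rewrites /etc/hosts so that exactly one line maps
    // host to ip, dropping any previous mapping for the same host.
    func ensureHostsEntry(ip, host string) error {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // stale entry; re-added below with the current VIP
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("192.168.49.254", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
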
	I1105 18:06:16.334090  335586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:06:16.507174  335586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:06:16.531914  335586 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:06:16.532215  335586 config.go:182] Loaded profile config "ha-256890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:06:16.535183  335586 out.go:177] * Verifying Kubernetes components...
	I1105 18:06:16.537559  335586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:06:16.740477  335586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:06:16.755773  335586 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19910-279806/kubeconfig
	I1105 18:06:16.756074  335586 kapi.go:59] client config for ha-256890: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/client.key", CAFile:"/home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e9d0d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 18:06:16.756145  335586 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1105 18:06:16.756389  335586 node_ready.go:35] waiting up to 6m0s for node "ha-256890-m02" to be "Ready" ...
	I1105 18:06:16.756497  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:06:16.756517  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:16.756526  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:16.756531  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:29.191828  335586 round_trippers.go:574] Response Status: 500 Internal Server Error in 12435 milliseconds
	I1105 18:06:29.192038  335586 node_ready.go:53] error getting node "ha-256890-m02": etcdserver: request timed out
	I1105 18:06:29.192091  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:06:29.192097  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:29.192104  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:29.192108  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:37.340585  335586 round_trippers.go:574] Response Status: 500 Internal Server Error in 8148 milliseconds
	I1105 18:06:37.340927  335586 node_ready.go:53] error getting node "ha-256890-m02": etcdserver: leader changed
	I1105 18:06:37.341003  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:06:37.341012  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:37.341027  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:37.341032  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:37.359423  335586 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1105 18:06:37.361338  335586 node_ready.go:49] node "ha-256890-m02" has status "Ready":"True"
	I1105 18:06:37.361414  335586 node_ready.go:38] duration metric: took 20.605004621s for node "ha-256890-m02" to be "Ready" ...
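
The 20.6s wait spans two 500 responses ("request timed out", then "leader changed") while etcd re-elects after the secondary control plane rejoins; once a leader settles, the node read succeeds and reports Ready. A simplified sketch of such a readiness poll with client-go, assuming a kubeconfig path and plain retry on any error (minikube's actual retry handling is more involved):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the API until the named node reports the Ready
    // condition as True, tolerating transient API errors (e.g. etcd
    // leader changes) by simply retrying on the next tick.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("node %s not Ready after %s", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitNodeReady(cs, "ha-256890-m02", 6*time.Minute))
    }
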
	I1105 18:06:37.361443  335586 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:06:37.361528  335586 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1105 18:06:37.361584  335586 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1105 18:06:37.361703  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:37.361754  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:37.361787  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:37.361833  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:37.389745  335586 round_trippers.go:574] Response Status: 429 Too Many Requests in 27 milliseconds
	I1105 18:06:38.390232  335586 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:38.390287  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1105 18:06:38.390294  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:38.390303  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:38.390308  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:38.428191  335586 round_trippers.go:574] Response Status: 200 OK in 37 milliseconds
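
The 429 above comes from API priority and fairness; the client honours the Retry-After: 1s header and the retried pods list succeeds a second later. A generic sketch of that retry pattern with net/http (the test itself goes through client-go's round tripper, and would also present client certificates):

    package main

    import (
    	"fmt"
    	"net/http"
    	"strconv"
    	"time"
    )

    // getWithRetryAfter issues GETs, sleeping for the server-provided
    // Retry-After duration whenever a 429 Too Many Requests comes back.
    func getWithRetryAfter(client *http.Client, url string, attempts int) (*http.Response, error) {
    	for i := 0; i < attempts; i++ {
    		resp, err := client.Get(url)
    		if err != nil {
    			return nil, err
    		}
    		if resp.StatusCode != http.StatusTooManyRequests {
    			return resp, nil
    		}
    		resp.Body.Close()
    		wait := time.Second
    		if s := resp.Header.Get("Retry-After"); s != "" {
    			if secs, err := strconv.Atoi(s); err == nil {
    				wait = time.Duration(secs) * time.Second
    			}
    		}
    		time.Sleep(wait)
    	}
    	return nil, fmt.Errorf("still throttled after %d attempts", attempts)
    }

    func main() {
    	resp, err := getWithRetryAfter(http.DefaultClient, "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods", 3)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println(resp.Status)
    }
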
	I1105 18:06:38.447785  335586 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2lr9d" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:38.447884  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:06:38.447891  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:38.447900  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:38.447906  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:38.452226  335586 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:38.453350  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:06:38.453367  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:38.453377  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:38.453382  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:38.461770  335586 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1105 18:06:38.462807  335586 pod_ready.go:93] pod "coredns-7c65d6cfc9-2lr9d" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:38.462826  335586 pod_ready.go:82] duration metric: took 15.013944ms for pod "coredns-7c65d6cfc9-2lr9d" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:38.462839  335586 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mtrp9" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:38.462904  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mtrp9
	I1105 18:06:38.462909  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:38.462917  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:38.462920  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:38.465329  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:38.466473  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:06:38.466515  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:38.466555  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:38.466580  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:38.468795  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:38.469765  335586 pod_ready.go:93] pod "coredns-7c65d6cfc9-mtrp9" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:38.469815  335586 pod_ready.go:82] duration metric: took 6.967298ms for pod "coredns-7c65d6cfc9-mtrp9" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:38.469840  335586 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-256890" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:38.469951  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890
	I1105 18:06:38.469975  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:38.469999  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:38.470034  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:38.477958  335586 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1105 18:06:38.479156  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:06:38.479199  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:38.479239  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:38.479260  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:38.481432  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:38.482320  335586 pod_ready.go:93] pod "etcd-ha-256890" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:38.482369  335586 pod_ready.go:82] duration metric: took 12.489471ms for pod "etcd-ha-256890" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:38.482396  335586 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:38.482489  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m02
	I1105 18:06:38.482523  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:38.482546  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:38.482566  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:38.484798  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:38.485770  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:06:38.485809  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:38.485893  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:38.485931  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:38.489747  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:38.490672  335586 pod_ready.go:93] pod "etcd-ha-256890-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:38.490713  335586 pod_ready.go:82] duration metric: took 8.294606ms for pod "etcd-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:38.490753  335586 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:38.490848  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:06:38.490871  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:38.490911  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:38.490934  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:38.493196  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:38.590372  335586 request.go:632] Waited for 96.198661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:06:38.590483  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:06:38.590532  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:38.590559  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:38.590579  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:38.595875  335586 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:06:38.597005  335586 pod_ready.go:93] pod "etcd-ha-256890-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:38.597055  335586 pod_ready.go:82] duration metric: took 106.275354ms for pod "etcd-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:38.597107  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-256890" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:38.790349  335586 request.go:632] Waited for 193.143621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256890
	I1105 18:06:38.790456  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256890
	I1105 18:06:38.790513  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:38.790540  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:38.790559  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:38.793174  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:38.991047  335586 request.go:632] Waited for 197.107641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:06:38.991157  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:06:38.991193  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:38.991220  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:38.991239  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:38.994041  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:38.994675  335586 pod_ready.go:93] pod "kube-apiserver-ha-256890" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:38.994724  335586 pod_ready.go:82] duration metric: took 397.590032ms for pod "kube-apiserver-ha-256890" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:38.994751  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:39.190591  335586 request.go:632] Waited for 195.746781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256890-m02
	I1105 18:06:39.190702  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256890-m02
	I1105 18:06:39.190737  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:39.190766  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:39.190786  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:39.193507  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:39.390502  335586 request.go:632] Waited for 196.175566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:06:39.390607  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:06:39.390628  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:39.390697  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:39.390716  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:39.395258  335586 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:39.395881  335586 pod_ready.go:93] pod "kube-apiserver-ha-256890-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:39.395922  335586 pod_ready.go:82] duration metric: took 401.150732ms for pod "kube-apiserver-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:39.395955  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:39.590807  335586 request.go:632] Waited for 194.760791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256890-m03
	I1105 18:06:39.590913  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256890-m03
	I1105 18:06:39.590974  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:39.591001  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:39.591019  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:39.597205  335586 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 18:06:39.790234  335586 request.go:632] Waited for 192.183434ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:06:39.790292  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:06:39.790299  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:39.790307  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:39.790311  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:39.793132  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:39.793613  335586 pod_ready.go:93] pod "kube-apiserver-ha-256890-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:39.793626  335586 pod_ready.go:82] duration metric: took 397.650387ms for pod "kube-apiserver-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:39.793637  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-256890" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:39.991157  335586 request.go:632] Waited for 197.453791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256890
	I1105 18:06:39.991222  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256890
	I1105 18:06:39.991228  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:39.991237  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:39.991242  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:40.007555  335586 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1105 18:06:40.191138  335586 request.go:632] Waited for 181.278027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:06:40.191251  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:06:40.191288  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:40.191316  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:40.191339  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:40.198260  335586 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 18:06:40.198894  335586 pod_ready.go:93] pod "kube-controller-manager-ha-256890" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:40.198944  335586 pod_ready.go:82] duration metric: took 405.298588ms for pod "kube-controller-manager-ha-256890" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:40.198972  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:40.390352  335586 request.go:632] Waited for 191.239733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256890-m02
	I1105 18:06:40.390472  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256890-m02
	I1105 18:06:40.390506  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:40.390536  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:40.390555  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:40.393690  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:40.590235  335586 request.go:632] Waited for 195.248898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:06:40.590380  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:06:40.590416  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:40.590446  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:40.590467  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:40.593142  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:40.593740  335586 pod_ready.go:93] pod "kube-controller-manager-ha-256890-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:40.593777  335586 pod_ready.go:82] duration metric: took 394.782934ms for pod "kube-controller-manager-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:40.593817  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:40.791178  335586 request.go:632] Waited for 197.276911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256890-m03
	I1105 18:06:40.791239  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256890-m03
	I1105 18:06:40.791248  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:40.791257  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:40.791262  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:40.794238  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:40.990307  335586 request.go:632] Waited for 195.294469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:06:40.990412  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:06:40.990470  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:40.990486  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:40.990493  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:40.993565  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:40.994170  335586 pod_ready.go:93] pod "kube-controller-manager-ha-256890-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:40.994191  335586 pod_ready.go:82] duration metric: took 400.349356ms for pod "kube-controller-manager-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:40.994203  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8wk8p" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:41.191098  335586 request.go:632] Waited for 196.814762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8wk8p
	I1105 18:06:41.191158  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8wk8p
	I1105 18:06:41.191169  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:41.191178  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:41.191186  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:41.194129  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:41.391119  335586 request.go:632] Waited for 196.280711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:06:41.391184  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:06:41.391196  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:41.391208  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:41.391217  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:41.394018  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:41.394647  335586 pod_ready.go:93] pod "kube-proxy-8wk8p" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:41.394667  335586 pod_ready.go:82] duration metric: took 400.452615ms for pod "kube-proxy-8wk8p" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:41.394681  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8xxrt" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:41.590872  335586 request.go:632] Waited for 196.122213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xxrt
	I1105 18:06:41.590954  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xxrt
	I1105 18:06:41.590964  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:41.590974  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:41.590984  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:41.594092  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:41.791270  335586 request.go:632] Waited for 196.33005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:06:41.791343  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:06:41.791352  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:41.791361  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:41.791373  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:41.795935  335586 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:06:41.796577  335586 pod_ready.go:93] pod "kube-proxy-8xxrt" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:41.796599  335586 pod_ready.go:82] duration metric: took 401.908588ms for pod "kube-proxy-8xxrt" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:41.796626  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bvn86" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:41.990376  335586 request.go:632] Waited for 192.255818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:06:41.990437  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:06:41.990449  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:41.990463  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:41.990470  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:41.993502  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:42.190321  335586 request.go:632] Waited for 196.178023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:06:42.190462  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:06:42.190475  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:42.190483  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:42.190487  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:42.193422  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:42.193974  335586 pod_ready.go:93] pod "kube-proxy-bvn86" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:42.193996  335586 pod_ready.go:82] duration metric: took 397.357758ms for pod "kube-proxy-bvn86" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:42.194009  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkfkc" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:42.390960  335586 request.go:632] Waited for 196.851872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkfkc
	I1105 18:06:42.391022  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkfkc
	I1105 18:06:42.391028  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:42.391038  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:42.391073  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:42.393856  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:42.590895  335586 request.go:632] Waited for 196.287791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:06:42.591004  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:06:42.591062  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:42.591080  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:42.591086  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:42.594525  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:06:42.595320  335586 pod_ready.go:93] pod "kube-proxy-fkfkc" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:42.595340  335586 pod_ready.go:82] duration metric: took 401.3181ms for pod "kube-proxy-fkfkc" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:42.595353  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-256890" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:42.790368  335586 request.go:632] Waited for 194.942104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256890
	I1105 18:06:42.790461  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256890
	I1105 18:06:42.790474  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:42.790483  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:42.790515  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:42.793265  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:42.991251  335586 request.go:632] Waited for 197.337953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:06:42.991359  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:06:42.991404  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:42.991421  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:42.991427  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:42.994216  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:42.994849  335586 pod_ready.go:93] pod "kube-scheduler-ha-256890" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:42.994868  335586 pod_ready.go:82] duration metric: took 399.508476ms for pod "kube-scheduler-ha-256890" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:42.994882  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:43.191256  335586 request.go:632] Waited for 196.281792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256890-m02
	I1105 18:06:43.191348  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256890-m02
	I1105 18:06:43.191358  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:43.191367  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:43.191372  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:43.194149  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:43.391083  335586 request.go:632] Waited for 196.347244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:06:43.391139  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:06:43.391145  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:43.391154  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:43.391201  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:43.394012  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:06:43.394902  335586 pod_ready.go:93] pod "kube-scheduler-ha-256890-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:43.394928  335586 pod_ready.go:82] duration metric: took 400.012811ms for pod "kube-scheduler-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:43.394941  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:43.590774  335586 request.go:632] Waited for 195.759697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256890-m03
	I1105 18:06:43.590940  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256890-m03
	I1105 18:06:43.590977  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:43.591002  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:43.591024  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:43.596346  335586 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:06:43.790440  335586 request.go:632] Waited for 193.167999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:06:43.790544  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:06:43.790565  335586 round_trippers.go:469] Request Headers:
	I1105 18:06:43.790598  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:06:43.790623  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:06:43.796763  335586 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 18:06:43.797856  335586 pod_ready.go:93] pod "kube-scheduler-ha-256890-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:06:43.797877  335586 pod_ready.go:82] duration metric: took 402.927326ms for pod "kube-scheduler-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:06:43.797892  335586 pod_ready.go:39] duration metric: took 6.436423778s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:06:43.797906  335586 api_server.go:52] waiting for apiserver process to appear ...
	I1105 18:06:43.797970  335586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:06:43.821740  335586 api_server.go:72] duration metric: took 27.289770808s to wait for apiserver process to appear ...
	I1105 18:06:43.821768  335586 api_server.go:88] waiting for apiserver healthz status ...
	I1105 18:06:43.821809  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:43.832955  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:43.832994  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
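
Only one post-start hook, start-service-ip-repair-controllers, is still failing in the dump above; every other check is ok, so the caller keeps polling /healthz on a short interval until the hook completes and the endpoint turns 200. A minimal sketch of such a poll; it skips TLS verification and client certificates for brevity, which the real check does not:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns
    // 200 OK or the timeout elapses.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Illustrative only: the real client trusts the cluster CA and uses client certs.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute))
    }
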
	I1105 18:06:44.322647  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:44.330372  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:44.330402  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:06:44.821910  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:44.829439  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:44.829480  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:06:45.321983  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:45.329897  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:45.329970  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:06:45.822476  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:45.829917  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:45.829994  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:06:46.322633  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:46.330931  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:46.330960  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:06:46.822607  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:46.830198  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:46.830227  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:06:47.321859  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:47.329453  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:47.329487  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:06:47.822236  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:47.829811  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:47.829838  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:06:48.321933  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:48.331007  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:48.331037  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:06:48.822474  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:48.832790  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:48.832832  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:06:49.322475  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:49.330979  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:49.331016  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:06:49.822650  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:49.830402  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:49.830444  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:06:50.321927  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:50.329426  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:50.329455  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:06:50.822084  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:50.830947  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:50.830985  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:06:51.322374  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:51.514711  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:51.514747  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:06:51.821947  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:51.834875  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:51.834908  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:06:52.322239  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:52.330635  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:52.330673  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:06:52.822284  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:06:52.831717  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:06:52.831753  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	[... repeated output elided: from 18:06:53.321915 to 18:07:00.830553 every poll of https://192.168.49.2:8443/healthz (api_server.go:253, roughly every 500 ms) returned the identical 500 body shown above, with only poststarthook/start-service-ip-repair-controllers failing, and each response was logged as an info line (api_server.go:279) and again as a warning (api_server.go:103) ...]
	I1105 18:07:01.322056  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:01.329780  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:01.822450  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:01.831403  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:02.321909  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:02.329583  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:02.822230  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:02.829902  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:03.322606  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:03.330310  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:03.821844  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:03.829188  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:04.322778  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:04.330933  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:04.822727  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:04.845070  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:05.322521  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:05.330109  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:05.822888  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:05.830645  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:06.321848  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:06.329566  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:06.821992  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:06.829888  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:07.322310  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:07.329811  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:07.822764  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:07.830498  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:08.321928  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:08.329449  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:08.821975  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:08.829586  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:07:08.829615  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:09.322138  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:09.329727  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:07:09.329759  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:09.822270  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:09.829658  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:07:09.829687  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:10.321930  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:10.329541  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:07:10.329580  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:10.822070  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:10.829696  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:07:10.829721  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:11.321919  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:11.329834  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:07:11.329874  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:11.822390  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:11.829819  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:07:11.829847  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:12.322541  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:12.331490  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:07:12.331524  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:12.822315  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:12.829740  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:07:12.829767  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:13.322273  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:13.329812  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:07:13.329839  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:13.822030  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:13.829685  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:07:13.829715  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:14.322328  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:14.330283  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:07:14.330311  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:14.821830  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:14.829397  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:07:14.829426  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:15.322119  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:15.329597  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:07:15.329630  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:15.822092  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:15.831020  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:07:15.831049  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:16.322673  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:16.330398  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:07:16.330433  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
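Every probe in the loop above fails the same way: the only unhealthy check is [-]poststarthook/start-service-ip-repair-controllers, every other hook reports ok, so the apiserver keeps answering 500 and minikube keeps retrying roughly twice a second. To reproduce the same verbose per-check output by hand, something like the following should work (a sketch, assuming the addons-638421 kubeconfig context from this run is still reachable; kubectl get --raw and the /healthz?verbose endpoint are standard, while the bare curl variant may return 401/403 depending on anonymous-auth settings):

	# Ask the apiserver for the verbose per-check list via the test's kubeconfig context.
	kubectl --context addons-638421 get --raw='/healthz?verbose'

	# Or hit the endpoint shown in the log directly; -k skips the self-signed cert check.
	curl -k 'https://192.168.49.2:8443/healthz?verbose'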
	I1105 18:07:16.822027  335586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 18:07:16.822222  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 18:07:16.869643  335586 cri.go:89] found id: "d2a1e08dad656f007bd115ff1367115916de919a1629a8bbc63873246317a559"
	I1105 18:07:16.869666  335586 cri.go:89] found id: "5e7fa3d8dfecef5c9e5cb3d045abad02cd41f82db8fde17ff8b415ee435a353a"
	I1105 18:07:16.869671  335586 cri.go:89] found id: ""
	I1105 18:07:16.869678  335586 logs.go:282] 2 containers: [d2a1e08dad656f007bd115ff1367115916de919a1629a8bbc63873246317a559 5e7fa3d8dfecef5c9e5cb3d045abad02cd41f82db8fde17ff8b415ee435a353a]
	I1105 18:07:16.869736  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:16.873083  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:16.876175  335586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 18:07:16.876238  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 18:07:16.914112  335586 cri.go:89] found id: "db3c579c0ce6ab32850921c70b596c3aa797ca14d49abdb0f838dd3c42ead97f"
	I1105 18:07:16.914140  335586 cri.go:89] found id: "3d3d8ff7895279eb937ae03fc6712b29e8992d514c8ca436cfa74160aae0bda7"
	I1105 18:07:16.914145  335586 cri.go:89] found id: ""
	I1105 18:07:16.914152  335586 logs.go:282] 2 containers: [db3c579c0ce6ab32850921c70b596c3aa797ca14d49abdb0f838dd3c42ead97f 3d3d8ff7895279eb937ae03fc6712b29e8992d514c8ca436cfa74160aae0bda7]
	I1105 18:07:16.914208  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:16.919236  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:16.923792  335586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 18:07:16.923859  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 18:07:16.958511  335586 cri.go:89] found id: ""
	I1105 18:07:16.958582  335586 logs.go:282] 0 containers: []
	W1105 18:07:16.958606  335586 logs.go:284] No container was found matching "coredns"
	I1105 18:07:16.958628  335586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 18:07:16.958722  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 18:07:16.993720  335586 cri.go:89] found id: "90c45e5d84959ac997d6d8ffc79400c297cf8eb3f4d3675721a3cdf66b5acc65"
	I1105 18:07:16.993781  335586 cri.go:89] found id: "0c1599f06b63e4d081f92cbaf4356bf939114008469df204579447099cc8c870"
	I1105 18:07:16.993791  335586 cri.go:89] found id: ""
	I1105 18:07:16.993799  335586 logs.go:282] 2 containers: [90c45e5d84959ac997d6d8ffc79400c297cf8eb3f4d3675721a3cdf66b5acc65 0c1599f06b63e4d081f92cbaf4356bf939114008469df204579447099cc8c870]
	I1105 18:07:16.993862  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:16.997445  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:17.000896  335586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 18:07:17.000967  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 18:07:17.047261  335586 cri.go:89] found id: "9ff3e7fd7efb634b0d9f27e415a4fa2064a1f1b89c9870862db5aed94d00e6f3"
	I1105 18:07:17.047288  335586 cri.go:89] found id: ""
	I1105 18:07:17.047296  335586 logs.go:282] 1 containers: [9ff3e7fd7efb634b0d9f27e415a4fa2064a1f1b89c9870862db5aed94d00e6f3]
	I1105 18:07:17.047352  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:17.051113  335586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 18:07:17.051192  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 18:07:17.087607  335586 cri.go:89] found id: "3eeef12287acf0ec77f5b0ed154d40a0ca198c980b8ba63687aba3a49b6e1db8"
	I1105 18:07:17.087629  335586 cri.go:89] found id: "029d0545a4e75aa6c3382f0a3deee88e40567ec8a56348ccbfeca6a2d277b71a"
	I1105 18:07:17.087634  335586 cri.go:89] found id: ""
	I1105 18:07:17.087641  335586 logs.go:282] 2 containers: [3eeef12287acf0ec77f5b0ed154d40a0ca198c980b8ba63687aba3a49b6e1db8 029d0545a4e75aa6c3382f0a3deee88e40567ec8a56348ccbfeca6a2d277b71a]
	I1105 18:07:17.087698  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:17.092562  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:17.096803  335586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 18:07:17.096884  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 18:07:17.139778  335586 cri.go:89] found id: "0cae484058955f00729ce706d42251413f46cca152e87b3f109dec55a64c93a0"
	I1105 18:07:17.139799  335586 cri.go:89] found id: ""
	I1105 18:07:17.139807  335586 logs.go:282] 1 containers: [0cae484058955f00729ce706d42251413f46cca152e87b3f109dec55a64c93a0]
	I1105 18:07:17.139866  335586 ssh_runner.go:195] Run: which crictl
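The container lookups above all follow the same pattern: `sudo crictl ps -a --quiet --name=<component>` prints one container ID per line, and an empty result is what produces the `No container was found matching "coredns"` warning. A sketch of that lookup in Go (ListContainerIDs is a hypothetical helper, not minikube's cri.go API):

    package crihelpers

    import (
        "os/exec"
        "strings"
    )

    // ListContainerIDs shells out to crictl the same way the log lines above do
    // and returns the matching container IDs, one per output line; an empty
    // slice means no container (running or exited) matched the component name.
    func ListContainerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }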
	I1105 18:07:17.143484  335586 logs.go:123] Gathering logs for kubelet ...
	I1105 18:07:17.143509  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 18:07:17.224214  335586 logs.go:123] Gathering logs for etcd [db3c579c0ce6ab32850921c70b596c3aa797ca14d49abdb0f838dd3c42ead97f] ...
	I1105 18:07:17.224251  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db3c579c0ce6ab32850921c70b596c3aa797ca14d49abdb0f838dd3c42ead97f"
	I1105 18:07:17.286695  335586 logs.go:123] Gathering logs for kube-proxy [9ff3e7fd7efb634b0d9f27e415a4fa2064a1f1b89c9870862db5aed94d00e6f3] ...
	I1105 18:07:17.286744  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ff3e7fd7efb634b0d9f27e415a4fa2064a1f1b89c9870862db5aed94d00e6f3"
	I1105 18:07:17.326427  335586 logs.go:123] Gathering logs for kube-controller-manager [029d0545a4e75aa6c3382f0a3deee88e40567ec8a56348ccbfeca6a2d277b71a] ...
	I1105 18:07:17.326459  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 029d0545a4e75aa6c3382f0a3deee88e40567ec8a56348ccbfeca6a2d277b71a"
	I1105 18:07:17.361766  335586 logs.go:123] Gathering logs for etcd [3d3d8ff7895279eb937ae03fc6712b29e8992d514c8ca436cfa74160aae0bda7] ...
	I1105 18:07:17.361794  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d3d8ff7895279eb937ae03fc6712b29e8992d514c8ca436cfa74160aae0bda7"
	I1105 18:07:17.417283  335586 logs.go:123] Gathering logs for container status ...
	I1105 18:07:17.417323  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 18:07:17.471675  335586 logs.go:123] Gathering logs for kindnet [0cae484058955f00729ce706d42251413f46cca152e87b3f109dec55a64c93a0] ...
	I1105 18:07:17.471713  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0cae484058955f00729ce706d42251413f46cca152e87b3f109dec55a64c93a0"
	I1105 18:07:17.507921  335586 logs.go:123] Gathering logs for CRI-O ...
	I1105 18:07:17.507950  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 18:07:17.576433  335586 logs.go:123] Gathering logs for kube-apiserver [d2a1e08dad656f007bd115ff1367115916de919a1629a8bbc63873246317a559] ...
	I1105 18:07:17.576469  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2a1e08dad656f007bd115ff1367115916de919a1629a8bbc63873246317a559"
	I1105 18:07:17.625927  335586 logs.go:123] Gathering logs for kube-scheduler [90c45e5d84959ac997d6d8ffc79400c297cf8eb3f4d3675721a3cdf66b5acc65] ...
	I1105 18:07:17.625956  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90c45e5d84959ac997d6d8ffc79400c297cf8eb3f4d3675721a3cdf66b5acc65"
	I1105 18:07:17.702983  335586 logs.go:123] Gathering logs for kube-scheduler [0c1599f06b63e4d081f92cbaf4356bf939114008469df204579447099cc8c870] ...
	I1105 18:07:17.703018  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c1599f06b63e4d081f92cbaf4356bf939114008469df204579447099cc8c870"
	I1105 18:07:17.751755  335586 logs.go:123] Gathering logs for kube-controller-manager [3eeef12287acf0ec77f5b0ed154d40a0ca198c980b8ba63687aba3a49b6e1db8] ...
	I1105 18:07:17.751783  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eeef12287acf0ec77f5b0ed154d40a0ca198c980b8ba63687aba3a49b6e1db8"
	I1105 18:07:17.820325  335586 logs.go:123] Gathering logs for dmesg ...
	I1105 18:07:17.820359  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 18:07:17.846750  335586 logs.go:123] Gathering logs for describe nodes ...
	I1105 18:07:17.846780  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 18:07:18.621704  335586 logs.go:123] Gathering logs for kube-apiserver [5e7fa3d8dfecef5c9e5cb3d045abad02cd41f82db8fde17ff8b415ee435a353a] ...
	I1105 18:07:18.621780  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7fa3d8dfecef5c9e5cb3d045abad02cd41f82db8fde17ff8b415ee435a353a"
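Each "Gathering logs for ..." step above runs exactly one bounded command: journalctl for systemd units, `crictl logs --tail 400` for individual containers, dmesg for the kernel ring buffer, and the bundled kubectl for `describe nodes`. The helper below only summarizes those commands as a lookup; GatherCommand is a made-up name and minikube assembles the commands internally in logs.go:

    package logdump

    import "fmt"

    // GatherCommand returns the bounded log-collection command for a source,
    // mirroring the Run: lines in the log; containerID is only used for the
    // per-container case.
    func GatherCommand(source, containerID string) string {
        switch source {
        case "kubelet", "crio":
            return fmt.Sprintf("sudo journalctl -u %s -n 400", source)
        case "dmesg":
            return "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
        case "container":
            return "sudo /usr/bin/crictl logs --tail 400 " + containerID
        default:
            return ""
        }
    }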
	I1105 18:07:21.195127  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:21.203039  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1105 18:07:21.203075  335586 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1105 18:07:21.203102  335586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 18:07:21.203165  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 18:07:21.242718  335586 cri.go:89] found id: "d2a1e08dad656f007bd115ff1367115916de919a1629a8bbc63873246317a559"
	I1105 18:07:21.242739  335586 cri.go:89] found id: "5e7fa3d8dfecef5c9e5cb3d045abad02cd41f82db8fde17ff8b415ee435a353a"
	I1105 18:07:21.242744  335586 cri.go:89] found id: ""
	I1105 18:07:21.242752  335586 logs.go:282] 2 containers: [d2a1e08dad656f007bd115ff1367115916de919a1629a8bbc63873246317a559 5e7fa3d8dfecef5c9e5cb3d045abad02cd41f82db8fde17ff8b415ee435a353a]
	I1105 18:07:21.242819  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:21.246217  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:21.249558  335586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 18:07:21.249634  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 18:07:21.284859  335586 cri.go:89] found id: "db3c579c0ce6ab32850921c70b596c3aa797ca14d49abdb0f838dd3c42ead97f"
	I1105 18:07:21.284940  335586 cri.go:89] found id: "3d3d8ff7895279eb937ae03fc6712b29e8992d514c8ca436cfa74160aae0bda7"
	I1105 18:07:21.284955  335586 cri.go:89] found id: ""
	I1105 18:07:21.284963  335586 logs.go:282] 2 containers: [db3c579c0ce6ab32850921c70b596c3aa797ca14d49abdb0f838dd3c42ead97f 3d3d8ff7895279eb937ae03fc6712b29e8992d514c8ca436cfa74160aae0bda7]
	I1105 18:07:21.285035  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:21.288420  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:21.292276  335586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 18:07:21.292343  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 18:07:21.337722  335586 cri.go:89] found id: ""
	I1105 18:07:21.337746  335586 logs.go:282] 0 containers: []
	W1105 18:07:21.337755  335586 logs.go:284] No container was found matching "coredns"
	I1105 18:07:21.337762  335586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 18:07:21.337821  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 18:07:21.378080  335586 cri.go:89] found id: "90c45e5d84959ac997d6d8ffc79400c297cf8eb3f4d3675721a3cdf66b5acc65"
	I1105 18:07:21.378104  335586 cri.go:89] found id: "0c1599f06b63e4d081f92cbaf4356bf939114008469df204579447099cc8c870"
	I1105 18:07:21.378110  335586 cri.go:89] found id: ""
	I1105 18:07:21.378118  335586 logs.go:282] 2 containers: [90c45e5d84959ac997d6d8ffc79400c297cf8eb3f4d3675721a3cdf66b5acc65 0c1599f06b63e4d081f92cbaf4356bf939114008469df204579447099cc8c870]
	I1105 18:07:21.378173  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:21.381564  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:21.384864  335586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 18:07:21.384963  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 18:07:21.423215  335586 cri.go:89] found id: "9ff3e7fd7efb634b0d9f27e415a4fa2064a1f1b89c9870862db5aed94d00e6f3"
	I1105 18:07:21.423277  335586 cri.go:89] found id: ""
	I1105 18:07:21.423300  335586 logs.go:282] 1 containers: [9ff3e7fd7efb634b0d9f27e415a4fa2064a1f1b89c9870862db5aed94d00e6f3]
	I1105 18:07:21.423367  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:21.426665  335586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 18:07:21.426739  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 18:07:21.462867  335586 cri.go:89] found id: "3eeef12287acf0ec77f5b0ed154d40a0ca198c980b8ba63687aba3a49b6e1db8"
	I1105 18:07:21.462889  335586 cri.go:89] found id: "029d0545a4e75aa6c3382f0a3deee88e40567ec8a56348ccbfeca6a2d277b71a"
	I1105 18:07:21.462895  335586 cri.go:89] found id: ""
	I1105 18:07:21.462902  335586 logs.go:282] 2 containers: [3eeef12287acf0ec77f5b0ed154d40a0ca198c980b8ba63687aba3a49b6e1db8 029d0545a4e75aa6c3382f0a3deee88e40567ec8a56348ccbfeca6a2d277b71a]
	I1105 18:07:21.462973  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:21.466575  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:21.469993  335586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 18:07:21.470117  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 18:07:21.506345  335586 cri.go:89] found id: "0cae484058955f00729ce706d42251413f46cca152e87b3f109dec55a64c93a0"
	I1105 18:07:21.506374  335586 cri.go:89] found id: ""
	I1105 18:07:21.506383  335586 logs.go:282] 1 containers: [0cae484058955f00729ce706d42251413f46cca152e87b3f109dec55a64c93a0]
	I1105 18:07:21.506460  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:21.509966  335586 logs.go:123] Gathering logs for etcd [3d3d8ff7895279eb937ae03fc6712b29e8992d514c8ca436cfa74160aae0bda7] ...
	I1105 18:07:21.509992  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d3d8ff7895279eb937ae03fc6712b29e8992d514c8ca436cfa74160aae0bda7"
	I1105 18:07:21.572095  335586 logs.go:123] Gathering logs for kube-scheduler [90c45e5d84959ac997d6d8ffc79400c297cf8eb3f4d3675721a3cdf66b5acc65] ...
	I1105 18:07:21.572135  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90c45e5d84959ac997d6d8ffc79400c297cf8eb3f4d3675721a3cdf66b5acc65"
	I1105 18:07:21.650431  335586 logs.go:123] Gathering logs for kube-scheduler [0c1599f06b63e4d081f92cbaf4356bf939114008469df204579447099cc8c870] ...
	I1105 18:07:21.650473  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c1599f06b63e4d081f92cbaf4356bf939114008469df204579447099cc8c870"
	I1105 18:07:21.718768  335586 logs.go:123] Gathering logs for kube-controller-manager [029d0545a4e75aa6c3382f0a3deee88e40567ec8a56348ccbfeca6a2d277b71a] ...
	I1105 18:07:21.718799  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 029d0545a4e75aa6c3382f0a3deee88e40567ec8a56348ccbfeca6a2d277b71a"
	I1105 18:07:21.768661  335586 logs.go:123] Gathering logs for CRI-O ...
	I1105 18:07:21.768692  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 18:07:21.855924  335586 logs.go:123] Gathering logs for kube-apiserver [d2a1e08dad656f007bd115ff1367115916de919a1629a8bbc63873246317a559] ...
	I1105 18:07:21.855961  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2a1e08dad656f007bd115ff1367115916de919a1629a8bbc63873246317a559"
	I1105 18:07:21.941986  335586 logs.go:123] Gathering logs for etcd [db3c579c0ce6ab32850921c70b596c3aa797ca14d49abdb0f838dd3c42ead97f] ...
	I1105 18:07:21.942020  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db3c579c0ce6ab32850921c70b596c3aa797ca14d49abdb0f838dd3c42ead97f"
	I1105 18:07:22.069379  335586 logs.go:123] Gathering logs for kube-apiserver [5e7fa3d8dfecef5c9e5cb3d045abad02cd41f82db8fde17ff8b415ee435a353a] ...
	I1105 18:07:22.069415  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7fa3d8dfecef5c9e5cb3d045abad02cd41f82db8fde17ff8b415ee435a353a"
	I1105 18:07:22.174685  335586 logs.go:123] Gathering logs for kube-proxy [9ff3e7fd7efb634b0d9f27e415a4fa2064a1f1b89c9870862db5aed94d00e6f3] ...
	I1105 18:07:22.174723  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ff3e7fd7efb634b0d9f27e415a4fa2064a1f1b89c9870862db5aed94d00e6f3"
	I1105 18:07:22.252160  335586 logs.go:123] Gathering logs for kubelet ...
	I1105 18:07:22.252188  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 18:07:22.374564  335586 logs.go:123] Gathering logs for describe nodes ...
	I1105 18:07:22.374604  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 18:07:22.658996  335586 logs.go:123] Gathering logs for kube-controller-manager [3eeef12287acf0ec77f5b0ed154d40a0ca198c980b8ba63687aba3a49b6e1db8] ...
	I1105 18:07:22.659031  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eeef12287acf0ec77f5b0ed154d40a0ca198c980b8ba63687aba3a49b6e1db8"
	I1105 18:07:22.718529  335586 logs.go:123] Gathering logs for kindnet [0cae484058955f00729ce706d42251413f46cca152e87b3f109dec55a64c93a0] ...
	I1105 18:07:22.718569  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0cae484058955f00729ce706d42251413f46cca152e87b3f109dec55a64c93a0"
	I1105 18:07:22.759264  335586 logs.go:123] Gathering logs for dmesg ...
	I1105 18:07:22.759297  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 18:07:22.787270  335586 logs.go:123] Gathering logs for container status ...
	I1105 18:07:22.787303  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 18:07:25.352480  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:07:25.362597  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1105 18:07:25.362713  335586 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I1105 18:07:25.362730  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:25.362751  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:25.362756  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:25.376036  335586 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1105 18:07:25.376200  335586 api_server.go:141] control plane version: v1.31.2
	I1105 18:07:25.376220  335586 api_server.go:131] duration metric: took 41.554445486s to wait for apiserver health ...
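Once /healthz finally returns 200, the GET /version request above is how the control-plane version (v1.31.2) is read back. A sketch of that call (checkControlPlaneVersion is hypothetical; authentication and proper CA handling are omitted, and the gitVersion field name comes from the standard Kubernetes /version response):

    package apicheck

    import (
        "crypto/tls"
        "encoding/json"
        "net/http"
    )

    // versionInfo holds the only field needed from the apiserver's /version
    // response, e.g. {"gitVersion":"v1.31.2", ...}.
    type versionInfo struct {
        GitVersion string `json:"gitVersion"`
    }

    // checkControlPlaneVersion fetches <base>/version and returns gitVersion.
    // TLS verification is skipped because the test apiserver uses a
    // self-signed certificate; real clients should trust the cluster CA instead.
    func checkControlPlaneVersion(base string) (string, error) {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get(base + "/version")
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        var v versionInfo
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            return "", err
        }
        return v.GitVersion, nil
    }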
	I1105 18:07:25.376229  335586 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 18:07:25.376255  335586 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 18:07:25.376321  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 18:07:25.413179  335586 cri.go:89] found id: "d2a1e08dad656f007bd115ff1367115916de919a1629a8bbc63873246317a559"
	I1105 18:07:25.413259  335586 cri.go:89] found id: "5e7fa3d8dfecef5c9e5cb3d045abad02cd41f82db8fde17ff8b415ee435a353a"
	I1105 18:07:25.413279  335586 cri.go:89] found id: ""
	I1105 18:07:25.413303  335586 logs.go:282] 2 containers: [d2a1e08dad656f007bd115ff1367115916de919a1629a8bbc63873246317a559 5e7fa3d8dfecef5c9e5cb3d045abad02cd41f82db8fde17ff8b415ee435a353a]
	I1105 18:07:25.413389  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:25.417227  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:25.420875  335586 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 18:07:25.421000  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 18:07:25.457957  335586 cri.go:89] found id: "db3c579c0ce6ab32850921c70b596c3aa797ca14d49abdb0f838dd3c42ead97f"
	I1105 18:07:25.458026  335586 cri.go:89] found id: "3d3d8ff7895279eb937ae03fc6712b29e8992d514c8ca436cfa74160aae0bda7"
	I1105 18:07:25.458044  335586 cri.go:89] found id: ""
	I1105 18:07:25.458067  335586 logs.go:282] 2 containers: [db3c579c0ce6ab32850921c70b596c3aa797ca14d49abdb0f838dd3c42ead97f 3d3d8ff7895279eb937ae03fc6712b29e8992d514c8ca436cfa74160aae0bda7]
	I1105 18:07:25.458156  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:25.461934  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:25.465299  335586 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 18:07:25.465372  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 18:07:25.502551  335586 cri.go:89] found id: ""
	I1105 18:07:25.502574  335586 logs.go:282] 0 containers: []
	W1105 18:07:25.502582  335586 logs.go:284] No container was found matching "coredns"
	I1105 18:07:25.502589  335586 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 18:07:25.502647  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 18:07:25.540235  335586 cri.go:89] found id: "90c45e5d84959ac997d6d8ffc79400c297cf8eb3f4d3675721a3cdf66b5acc65"
	I1105 18:07:25.540257  335586 cri.go:89] found id: "0c1599f06b63e4d081f92cbaf4356bf939114008469df204579447099cc8c870"
	I1105 18:07:25.540262  335586 cri.go:89] found id: ""
	I1105 18:07:25.540270  335586 logs.go:282] 2 containers: [90c45e5d84959ac997d6d8ffc79400c297cf8eb3f4d3675721a3cdf66b5acc65 0c1599f06b63e4d081f92cbaf4356bf939114008469df204579447099cc8c870]
	I1105 18:07:25.540335  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:25.543999  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:25.547328  335586 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 18:07:25.547399  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 18:07:25.594836  335586 cri.go:89] found id: "9ff3e7fd7efb634b0d9f27e415a4fa2064a1f1b89c9870862db5aed94d00e6f3"
	I1105 18:07:25.594862  335586 cri.go:89] found id: ""
	I1105 18:07:25.594870  335586 logs.go:282] 1 containers: [9ff3e7fd7efb634b0d9f27e415a4fa2064a1f1b89c9870862db5aed94d00e6f3]
	I1105 18:07:25.594930  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:25.598931  335586 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 18:07:25.599002  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 18:07:25.642850  335586 cri.go:89] found id: "3eeef12287acf0ec77f5b0ed154d40a0ca198c980b8ba63687aba3a49b6e1db8"
	I1105 18:07:25.642873  335586 cri.go:89] found id: "029d0545a4e75aa6c3382f0a3deee88e40567ec8a56348ccbfeca6a2d277b71a"
	I1105 18:07:25.642879  335586 cri.go:89] found id: ""
	I1105 18:07:25.642887  335586 logs.go:282] 2 containers: [3eeef12287acf0ec77f5b0ed154d40a0ca198c980b8ba63687aba3a49b6e1db8 029d0545a4e75aa6c3382f0a3deee88e40567ec8a56348ccbfeca6a2d277b71a]
	I1105 18:07:25.642945  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:25.646600  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:25.649987  335586 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 18:07:25.650059  335586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 18:07:25.691667  335586 cri.go:89] found id: "0cae484058955f00729ce706d42251413f46cca152e87b3f109dec55a64c93a0"
	I1105 18:07:25.691731  335586 cri.go:89] found id: ""
	I1105 18:07:25.691754  335586 logs.go:282] 1 containers: [0cae484058955f00729ce706d42251413f46cca152e87b3f109dec55a64c93a0]
	I1105 18:07:25.691841  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:25.695924  335586 logs.go:123] Gathering logs for kube-scheduler [0c1599f06b63e4d081f92cbaf4356bf939114008469df204579447099cc8c870] ...
	I1105 18:07:25.695996  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c1599f06b63e4d081f92cbaf4356bf939114008469df204579447099cc8c870"
	I1105 18:07:25.734953  335586 logs.go:123] Gathering logs for container status ...
	I1105 18:07:25.734995  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 18:07:25.779334  335586 logs.go:123] Gathering logs for etcd [db3c579c0ce6ab32850921c70b596c3aa797ca14d49abdb0f838dd3c42ead97f] ...
	I1105 18:07:25.779367  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db3c579c0ce6ab32850921c70b596c3aa797ca14d49abdb0f838dd3c42ead97f"
	I1105 18:07:25.847065  335586 logs.go:123] Gathering logs for kube-scheduler [90c45e5d84959ac997d6d8ffc79400c297cf8eb3f4d3675721a3cdf66b5acc65] ...
	I1105 18:07:25.847099  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90c45e5d84959ac997d6d8ffc79400c297cf8eb3f4d3675721a3cdf66b5acc65"
	I1105 18:07:25.909036  335586 logs.go:123] Gathering logs for kube-controller-manager [3eeef12287acf0ec77f5b0ed154d40a0ca198c980b8ba63687aba3a49b6e1db8] ...
	I1105 18:07:25.909075  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3eeef12287acf0ec77f5b0ed154d40a0ca198c980b8ba63687aba3a49b6e1db8"
	I1105 18:07:25.976178  335586 logs.go:123] Gathering logs for kube-controller-manager [029d0545a4e75aa6c3382f0a3deee88e40567ec8a56348ccbfeca6a2d277b71a] ...
	I1105 18:07:25.976215  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 029d0545a4e75aa6c3382f0a3deee88e40567ec8a56348ccbfeca6a2d277b71a"
	I1105 18:07:26.028015  335586 logs.go:123] Gathering logs for describe nodes ...
	I1105 18:07:26.028044  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 18:07:26.431814  335586 logs.go:123] Gathering logs for kube-apiserver [d2a1e08dad656f007bd115ff1367115916de919a1629a8bbc63873246317a559] ...
	I1105 18:07:26.431854  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2a1e08dad656f007bd115ff1367115916de919a1629a8bbc63873246317a559"
	I1105 18:07:26.506494  335586 logs.go:123] Gathering logs for CRI-O ...
	I1105 18:07:26.506530  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 18:07:26.583429  335586 logs.go:123] Gathering logs for kubelet ...
	I1105 18:07:26.583474  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 18:07:26.699577  335586 logs.go:123] Gathering logs for kube-apiserver [5e7fa3d8dfecef5c9e5cb3d045abad02cd41f82db8fde17ff8b415ee435a353a] ...
	I1105 18:07:26.699618  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e7fa3d8dfecef5c9e5cb3d045abad02cd41f82db8fde17ff8b415ee435a353a"
	I1105 18:07:26.772249  335586 logs.go:123] Gathering logs for kube-proxy [9ff3e7fd7efb634b0d9f27e415a4fa2064a1f1b89c9870862db5aed94d00e6f3] ...
	I1105 18:07:26.772281  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ff3e7fd7efb634b0d9f27e415a4fa2064a1f1b89c9870862db5aed94d00e6f3"
	I1105 18:07:26.841536  335586 logs.go:123] Gathering logs for kindnet [0cae484058955f00729ce706d42251413f46cca152e87b3f109dec55a64c93a0] ...
	I1105 18:07:26.841609  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0cae484058955f00729ce706d42251413f46cca152e87b3f109dec55a64c93a0"
	I1105 18:07:26.899390  335586 logs.go:123] Gathering logs for dmesg ...
	I1105 18:07:26.899530  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 18:07:26.918387  335586 logs.go:123] Gathering logs for etcd [3d3d8ff7895279eb937ae03fc6712b29e8992d514c8ca436cfa74160aae0bda7] ...
	I1105 18:07:26.918474  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d3d8ff7895279eb937ae03fc6712b29e8992d514c8ca436cfa74160aae0bda7"
	I1105 18:07:29.488846  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1105 18:07:29.488869  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:29.488880  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:29.488885  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:29.496809  335586 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1105 18:07:29.508059  335586 system_pods.go:59] 26 kube-system pods found
	I1105 18:07:29.508111  335586 system_pods.go:61] "coredns-7c65d6cfc9-2lr9d" [9dd129e6-b269-4247-9fcd-a1d83d4de3ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 18:07:29.508121  335586 system_pods.go:61] "coredns-7c65d6cfc9-mtrp9" [6c8c450e-1782-4152-98cd-7fc8865610c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 18:07:29.508128  335586 system_pods.go:61] "etcd-ha-256890" [ee67871a-90e7-4d85-a10a-309dd2616edf] Running
	I1105 18:07:29.508134  335586 system_pods.go:61] "etcd-ha-256890-m02" [f07aaa9a-a819-4978-a356-5bef70c8afac] Running
	I1105 18:07:29.508138  335586 system_pods.go:61] "etcd-ha-256890-m03" [38a0c265-5e88-4084-86a5-e35caa172439] Running
	I1105 18:07:29.508142  335586 system_pods.go:61] "kindnet-2wtgp" [f5fe806a-70e0-4960-8c08-7151f6d20903] Running
	I1105 18:07:29.508146  335586 system_pods.go:61] "kindnet-gbjp6" [1b6e7ccf-4bd0-4f43-b9be-ceee89958178] Running
	I1105 18:07:29.508156  335586 system_pods.go:61] "kindnet-qhrld" [0d32eade-996f-4ff4-9d32-a7e4f852794e] Running
	I1105 18:07:29.508160  335586 system_pods.go:61] "kindnet-xmj9b" [0e1c2dff-a586-4ead-bdc7-62d89e53fae9] Running
	I1105 18:07:29.508175  335586 system_pods.go:61] "kube-apiserver-ha-256890" [3c8b1887-5354-477a-a1e4-40b6123e7a9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 18:07:29.508180  335586 system_pods.go:61] "kube-apiserver-ha-256890-m02" [5df2c5c3-3e7b-4749-a0d5-fa53bda0c0cf] Running
	I1105 18:07:29.508184  335586 system_pods.go:61] "kube-apiserver-ha-256890-m03" [6c2892f1-9be7-4ce6-a064-687199ff68bc] Running
	I1105 18:07:29.508194  335586 system_pods.go:61] "kube-controller-manager-ha-256890" [1d36bcf7-9778-435b-bb43-7a9c9fa82f7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 18:07:29.508206  335586 system_pods.go:61] "kube-controller-manager-ha-256890-m02" [d97fc050-9549-421e-ab8b-d8c921c1fae1] Running
	I1105 18:07:29.508212  335586 system_pods.go:61] "kube-controller-manager-ha-256890-m03" [e59d6bda-b88b-453d-b7ee-1435753a4b94] Running
	I1105 18:07:29.508218  335586 system_pods.go:61] "kube-proxy-8wk8p" [4b477b09-f30c-4b04-bb4b-4d93352d67d1] Running
	I1105 18:07:29.508227  335586 system_pods.go:61] "kube-proxy-8xxrt" [b440b7e8-a9ea-46b2-aa4c-e328a4992dc9] Running
	I1105 18:07:29.508231  335586 system_pods.go:61] "kube-proxy-bvn86" [8704b8e9-7835-4867-a696-3721a0c45574] Running
	I1105 18:07:29.508238  335586 system_pods.go:61] "kube-proxy-fkfkc" [ec5c8310-bbce-42a1-92c1-7c40c05f665f] Running
	I1105 18:07:29.508242  335586 system_pods.go:61] "kube-scheduler-ha-256890" [8087e2e5-a98e-44e9-bc3f-3cef224c7d01] Running
	I1105 18:07:29.508247  335586 system_pods.go:61] "kube-scheduler-ha-256890-m02" [8e9f0100-82de-408a-8201-b51d4539c897] Running
	I1105 18:07:29.508251  335586 system_pods.go:61] "kube-scheduler-ha-256890-m03" [1ab8fe88-73cf-4ccd-a2cc-48d69b7579c0] Running
	I1105 18:07:29.508256  335586 system_pods.go:61] "kube-vip-ha-256890" [d6c49b64-a886-46b0-b4e4-74f7eea29bad] Running
	I1105 18:07:29.508260  335586 system_pods.go:61] "kube-vip-ha-256890-m02" [691ec814-3af7-4d47-8e41-b1b89f693733] Running
	I1105 18:07:29.508266  335586 system_pods.go:61] "kube-vip-ha-256890-m03" [ccaceab5-c1df-4f0f-8fe4-00cbde487c48] Running
	I1105 18:07:29.508270  335586 system_pods.go:61] "storage-provisioner" [7fc064a9-a337-41ae-af49-77dc1192a13d] Running
	I1105 18:07:29.508277  335586 system_pods.go:74] duration metric: took 4.132039586s to wait for pod list to return data ...
	I1105 18:07:29.508287  335586 default_sa.go:34] waiting for default service account to be created ...
	I1105 18:07:29.508653  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:07:29.508668  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:29.508724  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:29.508747  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:29.514632  335586 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:07:29.514894  335586 default_sa.go:45] found service account: "default"
	I1105 18:07:29.514907  335586 default_sa.go:55] duration metric: took 6.614531ms for default service account to be created ...
	I1105 18:07:29.514917  335586 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 18:07:29.514977  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1105 18:07:29.514981  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:29.514989  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:29.514993  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:29.520153  335586 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:07:29.529372  335586 system_pods.go:86] 26 kube-system pods found
	I1105 18:07:29.529412  335586 system_pods.go:89] "coredns-7c65d6cfc9-2lr9d" [9dd129e6-b269-4247-9fcd-a1d83d4de3ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 18:07:29.529423  335586 system_pods.go:89] "coredns-7c65d6cfc9-mtrp9" [6c8c450e-1782-4152-98cd-7fc8865610c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1105 18:07:29.529432  335586 system_pods.go:89] "etcd-ha-256890" [ee67871a-90e7-4d85-a10a-309dd2616edf] Running
	I1105 18:07:29.529438  335586 system_pods.go:89] "etcd-ha-256890-m02" [f07aaa9a-a819-4978-a356-5bef70c8afac] Running
	I1105 18:07:29.529442  335586 system_pods.go:89] "etcd-ha-256890-m03" [38a0c265-5e88-4084-86a5-e35caa172439] Running
	I1105 18:07:29.529446  335586 system_pods.go:89] "kindnet-2wtgp" [f5fe806a-70e0-4960-8c08-7151f6d20903] Running
	I1105 18:07:29.529451  335586 system_pods.go:89] "kindnet-gbjp6" [1b6e7ccf-4bd0-4f43-b9be-ceee89958178] Running
	I1105 18:07:29.529462  335586 system_pods.go:89] "kindnet-qhrld" [0d32eade-996f-4ff4-9d32-a7e4f852794e] Running
	I1105 18:07:29.529467  335586 system_pods.go:89] "kindnet-xmj9b" [0e1c2dff-a586-4ead-bdc7-62d89e53fae9] Running
	I1105 18:07:29.529483  335586 system_pods.go:89] "kube-apiserver-ha-256890" [3c8b1887-5354-477a-a1e4-40b6123e7a9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1105 18:07:29.529489  335586 system_pods.go:89] "kube-apiserver-ha-256890-m02" [5df2c5c3-3e7b-4749-a0d5-fa53bda0c0cf] Running
	I1105 18:07:29.529494  335586 system_pods.go:89] "kube-apiserver-ha-256890-m03" [6c2892f1-9be7-4ce6-a064-687199ff68bc] Running
	I1105 18:07:29.529508  335586 system_pods.go:89] "kube-controller-manager-ha-256890" [1d36bcf7-9778-435b-bb43-7a9c9fa82f7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1105 18:07:29.529513  335586 system_pods.go:89] "kube-controller-manager-ha-256890-m02" [d97fc050-9549-421e-ab8b-d8c921c1fae1] Running
	I1105 18:07:29.529521  335586 system_pods.go:89] "kube-controller-manager-ha-256890-m03" [e59d6bda-b88b-453d-b7ee-1435753a4b94] Running
	I1105 18:07:29.529526  335586 system_pods.go:89] "kube-proxy-8wk8p" [4b477b09-f30c-4b04-bb4b-4d93352d67d1] Running
	I1105 18:07:29.529530  335586 system_pods.go:89] "kube-proxy-8xxrt" [b440b7e8-a9ea-46b2-aa4c-e328a4992dc9] Running
	I1105 18:07:29.529535  335586 system_pods.go:89] "kube-proxy-bvn86" [8704b8e9-7835-4867-a696-3721a0c45574] Running
	I1105 18:07:29.529539  335586 system_pods.go:89] "kube-proxy-fkfkc" [ec5c8310-bbce-42a1-92c1-7c40c05f665f] Running
	I1105 18:07:29.529545  335586 system_pods.go:89] "kube-scheduler-ha-256890" [8087e2e5-a98e-44e9-bc3f-3cef224c7d01] Running
	I1105 18:07:29.529550  335586 system_pods.go:89] "kube-scheduler-ha-256890-m02" [8e9f0100-82de-408a-8201-b51d4539c897] Running
	I1105 18:07:29.529555  335586 system_pods.go:89] "kube-scheduler-ha-256890-m03" [1ab8fe88-73cf-4ccd-a2cc-48d69b7579c0] Running
	I1105 18:07:29.529560  335586 system_pods.go:89] "kube-vip-ha-256890" [d6c49b64-a886-46b0-b4e4-74f7eea29bad] Running
	I1105 18:07:29.529570  335586 system_pods.go:89] "kube-vip-ha-256890-m02" [691ec814-3af7-4d47-8e41-b1b89f693733] Running
	I1105 18:07:29.529574  335586 system_pods.go:89] "kube-vip-ha-256890-m03" [ccaceab5-c1df-4f0f-8fe4-00cbde487c48] Running
	I1105 18:07:29.529580  335586 system_pods.go:89] "storage-provisioner" [7fc064a9-a337-41ae-af49-77dc1192a13d] Running
	I1105 18:07:29.529589  335586 system_pods.go:126] duration metric: took 14.667344ms to wait for k8s-apps to be running ...
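The "waiting for k8s-apps to be running" check above lists every kube-system pod and, in this run, passes as long as each pod's phase is Running (the two coredns pods still count even though their containers report ContainersNotReady). Roughly the same check with client-go, as a sketch (kubeSystemPodsRunning is a hypothetical helper, not minikube's system_pods.go):

    package podcheck

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // kubeSystemPodsRunning reports whether every pod in kube-system has phase
    // Running or Succeeded; this approximates the check logged above and is
    // not minikube's exact implementation.
    func kubeSystemPodsRunning(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
                return false, nil
            }
        }
        return true, nil
    }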
	I1105 18:07:29.529597  335586 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 18:07:29.529662  335586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:07:29.547160  335586 system_svc.go:56] duration metric: took 17.553049ms WaitForService to wait for kubelet
	I1105 18:07:29.547192  335586 kubeadm.go:582] duration metric: took 1m13.01522715s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:07:29.547212  335586 node_conditions.go:102] verifying NodePressure condition ...
	I1105 18:07:29.547295  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I1105 18:07:29.547307  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:29.547316  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:29.547322  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:29.555207  335586 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1105 18:07:29.557267  335586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1105 18:07:29.557299  335586 node_conditions.go:123] node cpu capacity is 2
	I1105 18:07:29.557311  335586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1105 18:07:29.557316  335586 node_conditions.go:123] node cpu capacity is 2
	I1105 18:07:29.557320  335586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1105 18:07:29.557325  335586 node_conditions.go:123] node cpu capacity is 2
	I1105 18:07:29.557328  335586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1105 18:07:29.557333  335586 node_conditions.go:123] node cpu capacity is 2
	I1105 18:07:29.557337  335586 node_conditions.go:105] duration metric: took 10.120046ms to run NodePressure ...
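The NodePressure step above reads each node's capacity from GET /api/v1/nodes; the repeated "203034800Ki / cpu 2" pairs are one per node (ephemeral-storage and CPU). The same read with client-go, as a sketch (printNodeCapacity is a made-up name):

    package nodecheck

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists all nodes and prints the two capacity values the
    // NodePressure check logs: ephemeral storage and CPU.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }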
	I1105 18:07:29.557348  335586 start.go:241] waiting for startup goroutines ...
	I1105 18:07:29.557371  335586 start.go:255] writing updated cluster config ...
	I1105 18:07:29.560444  335586 out.go:201] 
	I1105 18:07:29.563488  335586 config.go:182] Loaded profile config "ha-256890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:07:29.563612  335586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/config.json ...
	I1105 18:07:29.567946  335586 out.go:177] * Starting "ha-256890-m03" control-plane node in "ha-256890" cluster
	I1105 18:07:29.570617  335586 cache.go:121] Beginning downloading kic base image for docker with crio
	I1105 18:07:29.573215  335586 out.go:177] * Pulling base image v0.0.45-1730282848-19883 ...
	I1105 18:07:29.575745  335586 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:07:29.575776  335586 cache.go:56] Caching tarball of preloaded images
	I1105 18:07:29.575821  335586 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local docker daemon
	I1105 18:07:29.575903  335586 preload.go:172] Found /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1105 18:07:29.575914  335586 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:07:29.576056  335586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/config.json ...
	I1105 18:07:29.594100  335586 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local docker daemon, skipping pull
	I1105 18:07:29.594123  335586 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 exists in daemon, skipping load
	I1105 18:07:29.594137  335586 cache.go:194] Successfully downloaded all kic artifacts
	I1105 18:07:29.594163  335586 start.go:360] acquireMachinesLock for ha-256890-m03: {Name:mk57910bd2657d1cdb5d131fac28c726b0d803fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:07:29.594215  335586 start.go:364] duration metric: took 33.362µs to acquireMachinesLock for "ha-256890-m03"
	I1105 18:07:29.594238  335586 start.go:96] Skipping create...Using existing machine configuration
	I1105 18:07:29.594248  335586 fix.go:54] fixHost starting: m03
	I1105 18:07:29.594499  335586 cli_runner.go:164] Run: docker container inspect ha-256890-m03 --format={{.State.Status}}
	I1105 18:07:29.614121  335586 fix.go:112] recreateIfNeeded on ha-256890-m03: state=Stopped err=<nil>
	W1105 18:07:29.614145  335586 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 18:07:29.617342  335586 out.go:177] * Restarting existing docker container for "ha-256890-m03" ...
	I1105 18:07:29.619796  335586 cli_runner.go:164] Run: docker start ha-256890-m03
	I1105 18:07:29.968071  335586 cli_runner.go:164] Run: docker container inspect ha-256890-m03 --format={{.State.Status}}
	I1105 18:07:29.990004  335586 kic.go:430] container "ha-256890-m03" state is running.
	I1105 18:07:29.990391  335586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256890-m03
	I1105 18:07:30.014052  335586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/config.json ...
	I1105 18:07:30.014339  335586 machine.go:93] provisionDockerMachine start ...
	I1105 18:07:30.014413  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m03
	I1105 18:07:30.043056  335586 main.go:141] libmachine: Using SSH client type: native
	I1105 18:07:30.043311  335586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I1105 18:07:30.043322  335586 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 18:07:30.044596  335586 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36602->127.0.0.1:33185: read: connection reset by peer
	I1105 18:07:33.231041  335586 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256890-m03
	
	I1105 18:07:33.231073  335586 ubuntu.go:169] provisioning hostname "ha-256890-m03"
	I1105 18:07:33.231145  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m03
	I1105 18:07:33.261506  335586 main.go:141] libmachine: Using SSH client type: native
	I1105 18:07:33.261740  335586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I1105 18:07:33.261752  335586 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256890-m03 && echo "ha-256890-m03" | sudo tee /etc/hostname
	I1105 18:07:33.495180  335586 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256890-m03
	
	I1105 18:07:33.495279  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m03
	I1105 18:07:33.524624  335586 main.go:141] libmachine: Using SSH client type: native
	I1105 18:07:33.524869  335586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I1105 18:07:33.524892  335586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256890-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256890-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256890-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:07:33.707205  335586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:07:33.707297  335586 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19910-279806/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-279806/.minikube}
	I1105 18:07:33.707328  335586 ubuntu.go:177] setting up certificates
	I1105 18:07:33.707370  335586 provision.go:84] configureAuth start
	I1105 18:07:33.707458  335586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256890-m03
	I1105 18:07:33.732493  335586 provision.go:143] copyHostCerts
	I1105 18:07:33.732534  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem
	I1105 18:07:33.732569  335586 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem, removing ...
	I1105 18:07:33.732578  335586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem
	I1105 18:07:33.732700  335586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem (1078 bytes)
	I1105 18:07:33.732782  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem
	I1105 18:07:33.732800  335586 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem, removing ...
	I1105 18:07:33.732804  335586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem
	I1105 18:07:33.732832  335586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem (1123 bytes)
	I1105 18:07:33.732871  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem
	I1105 18:07:33.732886  335586 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem, removing ...
	I1105 18:07:33.732890  335586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem
	I1105 18:07:33.732922  335586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem (1679 bytes)
	I1105 18:07:33.732971  335586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem org=jenkins.ha-256890-m03 san=[127.0.0.1 192.168.49.4 ha-256890-m03 localhost minikube]
	I1105 18:07:34.259253  335586 provision.go:177] copyRemoteCerts
	I1105 18:07:34.259372  335586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:07:34.259449  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m03
	I1105 18:07:34.276632  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890-m03/id_rsa Username:docker}
	I1105 18:07:34.394829  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:07:34.394892  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 18:07:34.461017  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:07:34.461075  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1105 18:07:34.520647  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:07:34.520754  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 18:07:34.582629  335586 provision.go:87] duration metric: took 875.23216ms to configureAuth
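configureAuth above regenerates the machine server certificate (with the SANs listed a few lines earlier) and then copies ca.pem, server.pem and server-key.pem into /etc/docker on the node over the same SSH connection. The copy itself amounts to "write these bytes to a remote path"; a sketch of that operation with golang.org/x/crypto/ssh follows (copyFileOverSSH is hypothetical; minikube uses its own ssh_runner for this):

    package sshcopy

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    // copyFileOverSSH streams data into `sudo tee <remotePath>` on an existing
    // SSH connection, which is the essence of the scp lines in the log above.
    func copyFileOverSSH(client *ssh.Client, data []byte, remotePath string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + remotePath + " >/dev/null")
    }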
	I1105 18:07:34.582698  335586 ubuntu.go:193] setting minikube options for container-runtime
	I1105 18:07:34.582956  335586 config.go:182] Loaded profile config "ha-256890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:07:34.583104  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m03
	I1105 18:07:34.628763  335586 main.go:141] libmachine: Using SSH client type: native
	I1105 18:07:34.629013  335586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33185 <nil> <nil>}
	I1105 18:07:34.629030  335586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:07:35.668296  335586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:07:35.668382  335586 machine.go:96] duration metric: took 5.654031457s to provisionDockerMachine
	I1105 18:07:35.668410  335586 start.go:293] postStartSetup for "ha-256890-m03" (driver="docker")
	I1105 18:07:35.668449  335586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:07:35.668541  335586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:07:35.668704  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m03
	I1105 18:07:35.691870  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890-m03/id_rsa Username:docker}
	I1105 18:07:35.790420  335586 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:07:35.793522  335586 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1105 18:07:35.793555  335586 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1105 18:07:35.793566  335586 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1105 18:07:35.793572  335586 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1105 18:07:35.793582  335586 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-279806/.minikube/addons for local assets ...
	I1105 18:07:35.793640  335586 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-279806/.minikube/files for local assets ...
	I1105 18:07:35.793715  335586 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem -> 2851882.pem in /etc/ssl/certs
	I1105 18:07:35.793721  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem -> /etc/ssl/certs/2851882.pem
	I1105 18:07:35.793838  335586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:07:35.811086  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem --> /etc/ssl/certs/2851882.pem (1708 bytes)
	I1105 18:07:35.838119  335586 start.go:296] duration metric: took 169.667373ms for postStartSetup
	I1105 18:07:35.838204  335586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 18:07:35.838244  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m03
	I1105 18:07:35.858951  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890-m03/id_rsa Username:docker}
	I1105 18:07:35.946126  335586 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1105 18:07:35.950897  335586 fix.go:56] duration metric: took 6.356641543s for fixHost
	I1105 18:07:35.950920  335586 start.go:83] releasing machines lock for "ha-256890-m03", held for 6.356692028s
	I1105 18:07:35.950992  335586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256890-m03
	I1105 18:07:35.970829  335586 out.go:177] * Found network options:
	I1105 18:07:35.973574  335586 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1105 18:07:35.976236  335586 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 18:07:35.976287  335586 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 18:07:35.976314  335586 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 18:07:35.976326  335586 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:07:35.976411  335586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:07:35.976453  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m03
	I1105 18:07:35.976481  335586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:07:35.976575  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m03
	I1105 18:07:35.997527  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890-m03/id_rsa Username:docker}
	I1105 18:07:36.004798  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33185 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890-m03/id_rsa Username:docker}
	I1105 18:07:36.448169  335586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 18:07:36.465706  335586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:07:36.484309  335586 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1105 18:07:36.484388  335586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:07:36.506219  335586 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1105 18:07:36.506292  335586 start.go:495] detecting cgroup driver to use...
	I1105 18:07:36.506340  335586 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1105 18:07:36.506419  335586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:07:36.537562  335586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:07:36.570456  335586 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:07:36.570522  335586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:07:36.600835  335586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:07:36.626897  335586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:07:36.822923  335586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:07:37.036350  335586 docker.go:233] disabling docker service ...
	I1105 18:07:37.036425  335586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:07:37.061417  335586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:07:37.103606  335586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:07:37.284480  335586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:07:37.483228  335586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:07:37.511908  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:07:37.563434  335586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:07:37.563510  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:07:37.591331  335586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:07:37.591455  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:07:37.615811  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:07:37.652371  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:07:37.683476  335586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:07:37.705982  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:07:37.725670  335586 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:07:37.754927  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:07:37.779234  335586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:07:37.800352  335586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:07:37.818884  335586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:07:37.985930  335586 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 18:07:39.249617  335586 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.263597331s)
	I1105 18:07:39.249641  335586 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:07:39.249694  335586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:07:39.253429  335586 start.go:563] Will wait 60s for crictl version
	I1105 18:07:39.253497  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:07:39.256907  335586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:07:39.310713  335586 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1105 18:07:39.310801  335586 ssh_runner.go:195] Run: crio --version
	I1105 18:07:39.366371  335586 ssh_runner.go:195] Run: crio --version
	I1105 18:07:39.407715  335586 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1105 18:07:39.410186  335586 out.go:177]   - env NO_PROXY=192.168.49.2
	I1105 18:07:39.412729  335586 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1105 18:07:39.415486  335586 cli_runner.go:164] Run: docker network inspect ha-256890 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1105 18:07:39.432116  335586 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1105 18:07:39.435701  335586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:07:39.446409  335586 mustload.go:65] Loading cluster: ha-256890
	I1105 18:07:39.446649  335586 config.go:182] Loaded profile config "ha-256890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:07:39.446895  335586 cli_runner.go:164] Run: docker container inspect ha-256890 --format={{.State.Status}}
	I1105 18:07:39.463851  335586 host.go:66] Checking if "ha-256890" exists ...
	I1105 18:07:39.464111  335586 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890 for IP: 192.168.49.4
	I1105 18:07:39.464127  335586 certs.go:194] generating shared ca certs ...
	I1105 18:07:39.464143  335586 certs.go:226] acquiring lock for ca certs: {Name:mk7e394808202081d7250bf8ad59a3f119279ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:07:39.464257  335586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key
	I1105 18:07:39.464302  335586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key
	I1105 18:07:39.464313  335586 certs.go:256] generating profile certs ...
	I1105 18:07:39.464388  335586 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/client.key
	I1105 18:07:39.464457  335586 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.key.5dba5964
	I1105 18:07:39.464502  335586 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/proxy-client.key
	I1105 18:07:39.464514  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:07:39.464528  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:07:39.464544  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:07:39.464555  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:07:39.464569  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1105 18:07:39.464584  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1105 18:07:39.464598  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1105 18:07:39.464679  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1105 18:07:39.464733  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/285188.pem (1338 bytes)
	W1105 18:07:39.464768  335586 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-279806/.minikube/certs/285188_empty.pem, impossibly tiny 0 bytes
	I1105 18:07:39.464780  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 18:07:39.464805  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem (1078 bytes)
	I1105 18:07:39.464833  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:07:39.464859  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem (1679 bytes)
	I1105 18:07:39.464901  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem (1708 bytes)
	I1105 18:07:39.464931  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:07:39.464952  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/285188.pem -> /usr/share/ca-certificates/285188.pem
	I1105 18:07:39.464966  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem -> /usr/share/ca-certificates/2851882.pem
	I1105 18:07:39.465023  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890
	I1105 18:07:39.482622  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890/id_rsa Username:docker}
	I1105 18:07:39.564926  335586 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1105 18:07:39.568595  335586 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1105 18:07:39.581138  335586 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1105 18:07:39.585254  335586 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1105 18:07:39.597101  335586 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1105 18:07:39.600229  335586 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1105 18:07:39.612654  335586 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1105 18:07:39.616121  335586 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1105 18:07:39.629165  335586 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1105 18:07:39.635170  335586 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1105 18:07:39.648034  335586 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1105 18:07:39.651485  335586 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1105 18:07:39.663394  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:07:39.687257  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 18:07:39.712379  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:07:39.738762  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 18:07:39.763865  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1105 18:07:39.790293  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 18:07:39.832193  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 18:07:39.860281  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1105 18:07:39.886221  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:07:39.910804  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/certs/285188.pem --> /usr/share/ca-certificates/285188.pem (1338 bytes)
	I1105 18:07:39.937130  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem --> /usr/share/ca-certificates/2851882.pem (1708 bytes)
	I1105 18:07:39.962765  335586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1105 18:07:39.982002  335586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1105 18:07:40.001129  335586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1105 18:07:40.036857  335586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1105 18:07:40.063880  335586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1105 18:07:40.089881  335586 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1105 18:07:40.134741  335586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1105 18:07:40.174211  335586 ssh_runner.go:195] Run: openssl version
	I1105 18:07:40.190098  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:07:40.208128  335586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:07:40.213273  335586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:47 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:07:40.213436  335586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:07:40.221233  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:07:40.233420  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/285188.pem && ln -fs /usr/share/ca-certificates/285188.pem /etc/ssl/certs/285188.pem"
	I1105 18:07:40.245036  335586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/285188.pem
	I1105 18:07:40.248648  335586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:57 /usr/share/ca-certificates/285188.pem
	I1105 18:07:40.248758  335586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/285188.pem
	I1105 18:07:40.256246  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/285188.pem /etc/ssl/certs/51391683.0"
	I1105 18:07:40.266271  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2851882.pem && ln -fs /usr/share/ca-certificates/2851882.pem /etc/ssl/certs/2851882.pem"
	I1105 18:07:40.276236  335586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2851882.pem
	I1105 18:07:40.280056  335586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:57 /usr/share/ca-certificates/2851882.pem
	I1105 18:07:40.280167  335586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2851882.pem
	I1105 18:07:40.292148  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2851882.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:07:40.312269  335586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:07:40.317239  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1105 18:07:40.325915  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1105 18:07:40.332840  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1105 18:07:40.340109  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1105 18:07:40.347344  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1105 18:07:40.354545  335586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1105 18:07:40.361427  335586 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.31.2 crio true true} ...
	I1105 18:07:40.361553  335586 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-256890-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-256890 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:07:40.361606  335586 kube-vip.go:115] generating kube-vip config ...
	I1105 18:07:40.361663  335586 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1105 18:07:40.375543  335586 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1105 18:07:40.375653  335586 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1105 18:07:40.375737  335586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:07:40.385585  335586 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:07:40.385684  335586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1105 18:07:40.394639  335586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1105 18:07:40.415353  335586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:07:40.434474  335586 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1105 18:07:40.456072  335586 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:07:40.460063  335586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:07:40.471878  335586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:07:40.577206  335586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:07:40.589801  335586 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 18:07:40.589999  335586 config.go:182] Loaded profile config "ha-256890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:07:40.593411  335586 out.go:177] * Verifying Kubernetes components...
	I1105 18:07:40.596081  335586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:07:40.701227  335586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:07:40.716713  335586 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19910-279806/kubeconfig
	I1105 18:07:40.717110  335586 kapi.go:59] client config for ha-256890: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/client.key", CAFile:"/home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e9d0d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 18:07:40.717204  335586 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1105 18:07:40.717479  335586 node_ready.go:35] waiting up to 6m0s for node "ha-256890-m03" to be "Ready" ...
	I1105 18:07:40.717575  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:40.717587  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:40.717596  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:40.717600  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:40.721094  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:07:41.218387  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:41.218410  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:41.218421  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:41.218427  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:41.221378  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:41.717744  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:41.717768  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:41.717778  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:41.717783  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:41.720363  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:42.217741  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:42.217763  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:42.217773  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:42.217778  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:42.220820  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:07:42.718518  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:42.718540  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:42.718550  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:42.718554  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:42.721475  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:42.722155  335586 node_ready.go:53] node "ha-256890-m03" has status "Ready":"Unknown"
	I1105 18:07:43.217858  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:43.217880  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:43.217891  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:43.217895  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:43.220863  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:43.718568  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:43.718590  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:43.718600  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:43.718604  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:43.721441  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:44.218631  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:44.218654  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:44.218665  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:44.218671  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:44.221994  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:07:44.222638  335586 node_ready.go:49] node "ha-256890-m03" has status "Ready":"True"
	I1105 18:07:44.222660  335586 node_ready.go:38] duration metric: took 3.505159811s for node "ha-256890-m03" to be "Ready" ...
	I1105 18:07:44.222672  335586 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:07:44.222743  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1105 18:07:44.222755  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:44.222763  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:44.222768  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:44.232587  335586 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1105 18:07:44.243965  335586 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2lr9d" in "kube-system" namespace to be "Ready" ...
	I1105 18:07:44.244098  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:44.244106  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:44.244115  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:44.244120  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:44.249858  335586 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:07:44.250865  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:44.250886  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:44.250896  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:44.250901  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:44.253667  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:44.744783  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:44.744804  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:44.744814  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:44.744819  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:44.747944  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:07:44.748842  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:44.748867  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:44.748880  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:44.748885  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:44.751693  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:45.244567  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:45.244589  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:45.244599  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:45.244628  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:45.247838  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:07:45.248657  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:45.248682  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:45.248692  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:45.248698  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:45.251644  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:45.744315  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:45.744337  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:45.744347  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:45.744351  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:45.747238  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:45.748043  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:45.748062  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:45.748071  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:45.748076  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:45.750784  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:46.244347  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:46.244372  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:46.244382  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:46.244386  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:46.247354  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:46.248071  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:46.248093  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:46.248103  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:46.248109  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:46.250847  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:46.251741  335586 pod_ready.go:103] pod "coredns-7c65d6cfc9-2lr9d" in "kube-system" namespace has status "Ready":"False"
	I1105 18:07:46.744691  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:46.744759  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:46.744793  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:46.744813  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:46.748023  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:07:46.748835  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:46.748889  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:46.748913  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:46.748932  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:46.751533  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:47.244247  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:47.244270  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:47.244281  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:47.244285  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:47.247324  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:07:47.248108  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:47.248131  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:47.248140  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:47.248145  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:47.250853  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:47.744787  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:47.744811  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:47.744820  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:47.744825  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:47.747744  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:47.748474  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:47.748493  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:47.748503  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:47.748506  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:47.751199  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:48.244492  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:48.244520  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:48.244530  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:48.244534  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:48.247549  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:48.248188  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:48.248199  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:48.248226  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:48.248233  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:48.250729  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:48.744771  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:48.744795  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:48.744803  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:48.744807  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:48.747701  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:48.748439  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:48.748457  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:48.748466  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:48.748472  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:48.751160  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:48.751736  335586 pod_ready.go:103] pod "coredns-7c65d6cfc9-2lr9d" in "kube-system" namespace has status "Ready":"False"
	I1105 18:07:49.244268  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:49.244298  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:49.244309  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:49.244341  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:49.247527  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:07:49.248372  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:49.248394  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:49.248405  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:49.248409  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:49.251207  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:49.744563  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:49.744588  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:49.744598  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:49.744624  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:49.747954  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:07:49.748852  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:49.748872  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:49.748889  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:49.748896  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:49.751632  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:50.244900  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:50.244923  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:50.244933  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:50.244938  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:50.249030  335586 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:07:50.250125  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:50.250141  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:50.250151  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:50.250154  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:50.253790  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:07:50.744557  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:50.744653  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:50.744678  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:50.744699  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:50.748176  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:07:50.749346  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:50.749362  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:50.749371  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:50.749376  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:50.752440  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:07:50.754227  335586 pod_ready.go:103] pod "coredns-7c65d6cfc9-2lr9d" in "kube-system" namespace has status "Ready":"False"
	I1105 18:07:51.244527  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:51.244547  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:51.244556  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:51.244560  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:51.252540  335586 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1105 18:07:51.253380  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:51.253420  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:51.253453  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:51.253472  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:51.256235  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:51.744529  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:51.744600  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:51.744677  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:51.744699  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:51.748929  335586 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:07:51.750327  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:51.750390  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:51.750415  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:51.750437  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:51.754318  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:07:52.244757  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:52.244826  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:52.244852  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:52.244874  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:52.252756  335586 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1105 18:07:52.254389  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:52.254455  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:52.254494  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:52.254518  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:52.260839  335586 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 18:07:52.744406  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:52.744423  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:52.744433  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:52.744439  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:52.747381  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:52.748188  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:52.748238  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:52.748261  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:52.748282  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:52.751381  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:07:53.244732  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:53.244795  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:53.244853  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:53.244864  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:53.270153  335586 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I1105 18:07:53.273386  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:53.273408  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:53.273419  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:53.273425  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:53.283612  335586 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1105 18:07:53.287351  335586 pod_ready.go:103] pod "coredns-7c65d6cfc9-2lr9d" in "kube-system" namespace has status "Ready":"False"
	I1105 18:07:53.745021  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:53.745093  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:53.745117  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:53.745139  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:53.747945  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:53.748871  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:53.748931  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:53.748955  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:53.748976  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:53.751466  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:54.244367  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:07:54.244441  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:54.244466  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:54.244486  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:54.257051  335586 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1105 18:07:54.258785  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:54.258849  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:54.258872  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:54.258894  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:54.272075  335586 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1105 18:07:54.272745  335586 pod_ready.go:98] node "ha-256890" hosting pod "coredns-7c65d6cfc9-2lr9d" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:07:54.272811  335586 pod_ready.go:82] duration metric: took 10.028804254s for pod "coredns-7c65d6cfc9-2lr9d" in "kube-system" namespace to be "Ready" ...
	E1105 18:07:54.272837  335586 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-256890" hosting pod "coredns-7c65d6cfc9-2lr9d" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:07:54.272861  335586 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mtrp9" in "kube-system" namespace to be "Ready" ...
	I1105 18:07:54.272952  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mtrp9
	I1105 18:07:54.272980  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:54.273011  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:54.273031  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:54.297757  335586 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I1105 18:07:54.298701  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:54.298755  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:54.298778  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:54.298798  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:54.314805  335586 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1105 18:07:54.338633  335586 pod_ready.go:98] node "ha-256890" hosting pod "coredns-7c65d6cfc9-mtrp9" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:07:54.338711  335586 pod_ready.go:82] duration metric: took 65.818602ms for pod "coredns-7c65d6cfc9-mtrp9" in "kube-system" namespace to be "Ready" ...
	E1105 18:07:54.338740  335586 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-256890" hosting pod "coredns-7c65d6cfc9-mtrp9" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:07:54.338761  335586 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-256890" in "kube-system" namespace to be "Ready" ...
	I1105 18:07:54.338854  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890
	I1105 18:07:54.338883  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:54.338906  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:54.338926  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:54.354495  335586 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1105 18:07:54.355104  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:07:54.355160  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:54.355184  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:54.355206  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:54.386054  335586 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I1105 18:07:54.386702  335586 pod_ready.go:98] node "ha-256890" hosting pod "etcd-ha-256890" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:07:54.386759  335586 pod_ready.go:82] duration metric: took 47.96637ms for pod "etcd-ha-256890" in "kube-system" namespace to be "Ready" ...
	E1105 18:07:54.386787  335586 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-256890" hosting pod "etcd-ha-256890" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:07:54.386809  335586 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:07:54.386896  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m02
	I1105 18:07:54.386921  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:54.386944  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:54.386964  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:54.404289  335586 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1105 18:07:54.405444  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:07:54.405509  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:54.405534  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:54.405557  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:54.422964  335586 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1105 18:07:54.423484  335586 pod_ready.go:93] pod "etcd-ha-256890-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:07:54.423538  335586 pod_ready.go:82] duration metric: took 36.704851ms for pod "etcd-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:07:54.423569  335586 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:07:54.423651  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:07:54.423680  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:54.423702  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:54.423723  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:54.434110  335586 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1105 18:07:54.435401  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:54.435457  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:54.435483  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:54.435503  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:54.448793  335586 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1105 18:07:54.924015  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:07:54.924086  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:54.924109  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:54.924130  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:54.927059  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:54.928173  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:54.928229  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:54.928253  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:54.928276  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:54.933027  335586 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:07:55.424230  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:07:55.424310  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:55.424335  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:55.424357  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:55.427605  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:07:55.428337  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:55.428387  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:55.428410  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:55.428431  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:55.431060  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:55.924590  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:07:55.924704  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:55.924729  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:55.924750  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:55.927419  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:55.928520  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:55.928576  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:55.928600  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:55.928645  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:55.931429  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:56.424106  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:07:56.424132  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:56.424142  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:56.424146  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:56.426910  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:56.427873  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:56.427896  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:56.427905  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:56.427910  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:56.430357  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:56.431375  335586 pod_ready.go:103] pod "etcd-ha-256890-m03" in "kube-system" namespace has status "Ready":"False"
	I1105 18:07:56.924694  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:07:56.924718  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:56.924728  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:56.924732  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:56.928841  335586 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:07:56.929921  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:56.929972  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:56.929993  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:56.929998  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:56.932427  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:57.423883  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:07:57.423909  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:57.423919  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:57.423922  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:57.426762  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:57.427539  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:57.427558  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:57.427567  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:57.427572  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:57.430188  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:57.923910  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:07:57.923931  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:57.923940  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:57.923944  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:57.926750  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:57.927536  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:57.927555  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:57.927564  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:57.927570  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:57.930958  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:07:58.423827  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:07:58.423849  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:58.423859  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:58.423864  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:58.426742  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:58.427405  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:58.427416  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:58.427424  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:58.427427  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:58.430083  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:58.924403  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:07:58.924425  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:58.924436  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:58.924441  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:58.927149  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:58.927926  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:58.927943  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:58.927953  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:58.927957  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:58.930339  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:58.931177  335586 pod_ready.go:103] pod "etcd-ha-256890-m03" in "kube-system" namespace has status "Ready":"False"
	I1105 18:07:59.424519  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:07:59.424543  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:59.424553  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:59.424559  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:59.427689  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:07:59.428459  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:59.428474  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:59.428484  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:59.428488  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:59.431407  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:59.923812  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:07:59.923837  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:59.923847  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:59.923853  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:59.926805  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:07:59.927579  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:07:59.927599  335586 round_trippers.go:469] Request Headers:
	I1105 18:07:59.927608  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:07:59.927611  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:07:59.929988  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:00.424395  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:00.424437  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:00.424447  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:00.424452  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:00.427305  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:00.428022  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:00.428040  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:00.428049  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:00.428054  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:00.430817  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:00.923873  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:00.923907  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:00.923917  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:00.923925  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:00.927223  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:00.927971  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:00.927992  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:00.928002  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:00.928006  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:00.931002  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:00.931671  335586 pod_ready.go:103] pod "etcd-ha-256890-m03" in "kube-system" namespace has status "Ready":"False"
	I1105 18:08:01.424464  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:01.424495  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:01.424506  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:01.424510  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:01.428052  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:01.428828  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:01.428871  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:01.428889  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:01.428896  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:01.431634  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:01.923844  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:01.923866  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:01.923875  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:01.923881  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:01.927773  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:01.928531  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:01.928580  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:01.928638  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:01.928659  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:01.931317  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:02.424582  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:02.424629  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:02.424640  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:02.424644  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:02.427559  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:02.428328  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:02.428349  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:02.428359  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:02.428365  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:02.430955  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:02.924051  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:02.924074  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:02.924083  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:02.924087  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:02.926978  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:02.927907  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:02.927923  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:02.927933  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:02.927938  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:02.930636  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:03.423812  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:03.423836  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:03.423847  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:03.423852  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:03.426871  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:03.427674  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:03.427692  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:03.427700  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:03.427703  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:03.430209  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:03.430712  335586 pod_ready.go:103] pod "etcd-ha-256890-m03" in "kube-system" namespace has status "Ready":"False"
	I1105 18:08:03.924475  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:03.924495  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:03.924509  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:03.924514  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:03.927297  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:03.928229  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:03.928249  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:03.928259  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:03.928264  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:03.930720  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:04.424539  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:04.424563  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:04.424574  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:04.424580  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:04.427383  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:04.428078  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:04.428099  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:04.428109  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:04.428114  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:04.430309  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:04.924271  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:04.924296  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:04.924308  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:04.924313  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:04.927575  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:04.928687  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:04.928712  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:04.928723  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:04.928726  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:04.931388  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:05.423780  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:05.423805  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:05.423815  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:05.423820  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:05.426637  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:05.427294  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:05.427306  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:05.427314  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:05.427319  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:05.430023  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:05.430860  335586 pod_ready.go:103] pod "etcd-ha-256890-m03" in "kube-system" namespace has status "Ready":"False"
	I1105 18:08:05.924325  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:05.924349  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:05.924358  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:05.924362  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:05.927442  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:05.928493  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:05.928513  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:05.928523  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:05.928527  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:05.931089  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:06.423939  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:06.423963  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:06.423979  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:06.423985  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:06.443570  335586 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I1105 18:08:06.444595  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:06.444628  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:06.444637  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:06.444642  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:06.448871  335586 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:08:06.924539  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:06.924562  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:06.924572  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:06.924577  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:06.927421  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:06.928188  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:06.928208  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:06.928218  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:06.928223  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:06.931301  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:07.424270  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:07.424300  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:07.424310  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:07.424316  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:07.427145  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:07.427776  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:07.427786  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:07.427796  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:07.427801  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:07.430534  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:07.431039  335586 pod_ready.go:103] pod "etcd-ha-256890-m03" in "kube-system" namespace has status "Ready":"False"
	I1105 18:08:07.923964  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:07.923985  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:07.923995  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:07.923999  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:07.926978  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:07.927892  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:07.927912  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:07.927922  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:07.927927  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:07.930468  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:08.424741  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:08.424765  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:08.424775  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:08.424781  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:08.427737  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:08.428668  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:08.428686  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:08.428696  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:08.428700  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:08.431535  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:08.432291  335586 pod_ready.go:93] pod "etcd-ha-256890-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:08.432312  335586 pod_ready.go:82] duration metric: took 14.008722546s for pod "etcd-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:08.432344  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-256890" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:08.432418  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256890
	I1105 18:08:08.432428  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:08.432437  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:08.432443  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:08.435079  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:08.435904  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:08:08.435921  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:08.435931  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:08.435937  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:08.438502  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:08.439218  335586 pod_ready.go:98] node "ha-256890" hosting pod "kube-apiserver-ha-256890" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:08.439244  335586 pod_ready.go:82] duration metric: took 6.888023ms for pod "kube-apiserver-ha-256890" in "kube-system" namespace to be "Ready" ...
	E1105 18:08:08.439256  335586 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-256890" hosting pod "kube-apiserver-ha-256890" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:08.439284  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:08.439379  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256890-m02
	I1105 18:08:08.439391  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:08.439400  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:08.439410  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:08.442141  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:08.443114  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:08:08.443135  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:08.443145  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:08.443149  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:08.445795  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:08.446498  335586 pod_ready.go:93] pod "kube-apiserver-ha-256890-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:08.446520  335586 pod_ready.go:82] duration metric: took 7.219782ms for pod "kube-apiserver-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:08.446533  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:08.446603  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256890-m03
	I1105 18:08:08.446614  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:08.446622  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:08.446627  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:08.449296  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:08.450077  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:08.450097  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:08.450106  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:08.450111  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:08.452573  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:08.453135  335586 pod_ready.go:93] pod "kube-apiserver-ha-256890-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:08.453154  335586 pod_ready.go:82] duration metric: took 6.613315ms for pod "kube-apiserver-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:08.453167  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-256890" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:08.453234  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256890
	I1105 18:08:08.453246  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:08.453252  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:08.453256  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:08.455823  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:08.456580  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:08:08.456599  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:08.456656  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:08.456669  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:08.459193  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:08.459775  335586 pod_ready.go:98] node "ha-256890" hosting pod "kube-controller-manager-ha-256890" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:08.459803  335586 pod_ready.go:82] duration metric: took 6.626402ms for pod "kube-controller-manager-ha-256890" in "kube-system" namespace to be "Ready" ...
	E1105 18:08:08.459815  335586 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-256890" hosting pod "kube-controller-manager-ha-256890" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:08.459822  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:08.625110  335586 request.go:632] Waited for 165.197468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256890-m02
	I1105 18:08:08.625176  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256890-m02
	I1105 18:08:08.625188  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:08.625197  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:08.625201  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:08.628380  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:08.825469  335586 request.go:632] Waited for 196.353784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:08:08.825581  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:08:08.825595  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:08.825604  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:08.825608  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:08.828417  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:08.829017  335586 pod_ready.go:93] pod "kube-controller-manager-ha-256890-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:08.829036  335586 pod_ready.go:82] duration metric: took 369.203569ms for pod "kube-controller-manager-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:08.829069  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:09.024798  335586 request.go:632] Waited for 195.657977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256890-m03
	I1105 18:08:09.024888  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256890-m03
	I1105 18:08:09.024901  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:09.024910  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:09.024915  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:09.027931  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:09.224990  335586 request.go:632] Waited for 196.241519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:09.225054  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:09.225065  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:09.225074  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:09.225082  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:09.228396  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:09.229021  335586 pod_ready.go:93] pod "kube-controller-manager-ha-256890-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:09.229051  335586 pod_ready.go:82] duration metric: took 399.960243ms for pod "kube-controller-manager-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:09.229082  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8wk8p" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:09.424815  335586 request.go:632] Waited for 195.635511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8wk8p
	I1105 18:08:09.424879  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8wk8p
	I1105 18:08:09.424885  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:09.424893  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:09.424898  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:09.427902  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:09.624818  335586 request.go:632] Waited for 196.26067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:08:09.624899  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:08:09.624906  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:09.624914  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:09.624926  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:09.627656  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:09.628522  335586 pod_ready.go:98] node "ha-256890" hosting pod "kube-proxy-8wk8p" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:09.628548  335586 pod_ready.go:82] duration metric: took 399.431903ms for pod "kube-proxy-8wk8p" in "kube-system" namespace to be "Ready" ...
	E1105 18:08:09.628559  335586 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-256890" hosting pod "kube-proxy-8wk8p" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:09.628567  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8xxrt" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:09.825489  335586 request.go:632] Waited for 196.853878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xxrt
	I1105 18:08:09.825646  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xxrt
	I1105 18:08:09.825657  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:09.825667  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:09.825672  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:09.828489  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:10.025355  335586 request.go:632] Waited for 196.249421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:10.025413  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:10.025419  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:10.025436  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:10.025441  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:10.035037  335586 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1105 18:08:10.036234  335586 pod_ready.go:93] pod "kube-proxy-8xxrt" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:10.036258  335586 pod_ready.go:82] duration metric: took 407.682663ms for pod "kube-proxy-8xxrt" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:10.036271  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bvn86" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:10.225634  335586 request.go:632] Waited for 189.292792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:08:10.225713  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:08:10.225718  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:10.225733  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:10.225738  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:10.228569  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:10.425781  335586 request.go:632] Waited for 196.328796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:10.425838  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:10.425844  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:10.425852  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:10.425859  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:10.428700  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:10.429317  335586 pod_ready.go:98] node "ha-256890-m04" hosting pod "kube-proxy-bvn86" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890-m04" has status "Ready":"Unknown"
	I1105 18:08:10.429343  335586 pod_ready.go:82] duration metric: took 393.065389ms for pod "kube-proxy-bvn86" in "kube-system" namespace to be "Ready" ...
	E1105 18:08:10.429354  335586 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-256890-m04" hosting pod "kube-proxy-bvn86" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890-m04" has status "Ready":"Unknown"
	I1105 18:08:10.429362  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkfkc" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:10.625260  335586 request.go:632] Waited for 195.825435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkfkc
	I1105 18:08:10.625330  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkfkc
	I1105 18:08:10.625343  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:10.625360  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:10.625368  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:10.628319  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:10.825624  335586 request.go:632] Waited for 196.340251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:08:10.825681  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:08:10.825692  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:10.825700  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:10.825709  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:10.828579  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:10.829181  335586 pod_ready.go:93] pod "kube-proxy-fkfkc" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:10.829204  335586 pod_ready.go:82] duration metric: took 399.829907ms for pod "kube-proxy-fkfkc" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:10.829225  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-256890" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:11.025134  335586 request.go:632] Waited for 195.835961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256890
	I1105 18:08:11.025236  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256890
	I1105 18:08:11.025266  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:11.025293  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:11.025314  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:11.028132  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:11.225293  335586 request.go:632] Waited for 196.343728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:08:11.225348  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:08:11.225355  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:11.225364  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:11.225375  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:11.228305  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:11.229046  335586 pod_ready.go:98] node "ha-256890" hosting pod "kube-scheduler-ha-256890" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:11.229078  335586 pod_ready.go:82] duration metric: took 399.84109ms for pod "kube-scheduler-ha-256890" in "kube-system" namespace to be "Ready" ...
	E1105 18:08:11.229089  335586 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-256890" hosting pod "kube-scheduler-ha-256890" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:11.229097  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:11.424827  335586 request.go:632] Waited for 195.653593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256890-m02
	I1105 18:08:11.424911  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256890-m02
	I1105 18:08:11.424925  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:11.424935  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:11.424943  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:11.428461  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:11.625410  335586 request.go:632] Waited for 196.286497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:08:11.625518  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:08:11.625581  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:11.625610  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:11.625630  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:11.628535  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:11.629175  335586 pod_ready.go:93] pod "kube-scheduler-ha-256890-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:11.629221  335586 pod_ready.go:82] duration metric: took 400.106328ms for pod "kube-scheduler-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:11.629239  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:11.825632  335586 request.go:632] Waited for 196.296885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256890-m03
	I1105 18:08:11.825691  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256890-m03
	I1105 18:08:11.825697  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:11.825706  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:11.825716  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:11.828542  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:12.025575  335586 request.go:632] Waited for 196.355354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:12.025667  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:12.025677  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:12.025686  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:12.025691  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:12.028750  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:12.029480  335586 pod_ready.go:93] pod "kube-scheduler-ha-256890-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:12.029502  335586 pod_ready.go:82] duration metric: took 400.254834ms for pod "kube-scheduler-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:12.029535  335586 pod_ready.go:39] duration metric: took 27.806831267s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:08:12.029559  335586 api_server.go:52] waiting for apiserver process to appear ...
	I1105 18:08:12.029650  335586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:08:12.054985  335586 api_server.go:72] duration metric: took 31.465089413s to wait for apiserver process to appear ...
	I1105 18:08:12.055054  335586 api_server.go:88] waiting for apiserver healthz status ...
	I1105 18:08:12.055083  335586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 18:08:12.066344  335586 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1105 18:08:12.066502  335586 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I1105 18:08:12.066517  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:12.066530  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:12.066540  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:12.067622  335586 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1105 18:08:12.067964  335586 api_server.go:141] control plane version: v1.31.2
	I1105 18:08:12.067994  335586 api_server.go:131] duration metric: took 12.922218ms to wait for apiserver health ...
	I1105 18:08:12.068006  335586 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 18:08:12.225405  335586 request.go:632] Waited for 157.306164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1105 18:08:12.225460  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1105 18:08:12.225472  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:12.225482  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:12.225492  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:12.231459  335586 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:08:12.241422  335586 system_pods.go:59] 26 kube-system pods found
	I1105 18:08:12.242711  335586 system_pods.go:61] "coredns-7c65d6cfc9-2lr9d" [9dd129e6-b269-4247-9fcd-a1d83d4de3ed] Running
	I1105 18:08:12.242738  335586 system_pods.go:61] "coredns-7c65d6cfc9-mtrp9" [6c8c450e-1782-4152-98cd-7fc8865610c1] Running
	I1105 18:08:12.242775  335586 system_pods.go:61] "etcd-ha-256890" [ee67871a-90e7-4d85-a10a-309dd2616edf] Running
	I1105 18:08:12.242807  335586 system_pods.go:61] "etcd-ha-256890-m02" [f07aaa9a-a819-4978-a356-5bef70c8afac] Running
	I1105 18:08:12.242832  335586 system_pods.go:61] "etcd-ha-256890-m03" [38a0c265-5e88-4084-86a5-e35caa172439] Running
	I1105 18:08:12.242854  335586 system_pods.go:61] "kindnet-2wtgp" [f5fe806a-70e0-4960-8c08-7151f6d20903] Running
	I1105 18:08:12.242886  335586 system_pods.go:61] "kindnet-gbjp6" [1b6e7ccf-4bd0-4f43-b9be-ceee89958178] Running
	I1105 18:08:12.242913  335586 system_pods.go:61] "kindnet-qhrld" [0d32eade-996f-4ff4-9d32-a7e4f852794e] Running
	I1105 18:08:12.242934  335586 system_pods.go:61] "kindnet-xmj9b" [0e1c2dff-a586-4ead-bdc7-62d89e53fae9] Running
	I1105 18:08:12.242954  335586 system_pods.go:61] "kube-apiserver-ha-256890" [3c8b1887-5354-477a-a1e4-40b6123e7a9f] Running
	I1105 18:08:12.242990  335586 system_pods.go:61] "kube-apiserver-ha-256890-m02" [5df2c5c3-3e7b-4749-a0d5-fa53bda0c0cf] Running
	I1105 18:08:12.243014  335586 system_pods.go:61] "kube-apiserver-ha-256890-m03" [6c2892f1-9be7-4ce6-a064-687199ff68bc] Running
	I1105 18:08:12.243034  335586 system_pods.go:61] "kube-controller-manager-ha-256890" [1d36bcf7-9778-435b-bb43-7a9c9fa82f7d] Running
	I1105 18:08:12.243054  335586 system_pods.go:61] "kube-controller-manager-ha-256890-m02" [d97fc050-9549-421e-ab8b-d8c921c1fae1] Running
	I1105 18:08:12.243099  335586 system_pods.go:61] "kube-controller-manager-ha-256890-m03" [e59d6bda-b88b-453d-b7ee-1435753a4b94] Running
	I1105 18:08:12.243121  335586 system_pods.go:61] "kube-proxy-8wk8p" [4b477b09-f30c-4b04-bb4b-4d93352d67d1] Running
	I1105 18:08:12.243140  335586 system_pods.go:61] "kube-proxy-8xxrt" [b440b7e8-a9ea-46b2-aa4c-e328a4992dc9] Running
	I1105 18:08:12.243160  335586 system_pods.go:61] "kube-proxy-bvn86" [8704b8e9-7835-4867-a696-3721a0c45574] Running
	I1105 18:08:12.243179  335586 system_pods.go:61] "kube-proxy-fkfkc" [ec5c8310-bbce-42a1-92c1-7c40c05f665f] Running
	I1105 18:08:12.243207  335586 system_pods.go:61] "kube-scheduler-ha-256890" [8087e2e5-a98e-44e9-bc3f-3cef224c7d01] Running
	I1105 18:08:12.243230  335586 system_pods.go:61] "kube-scheduler-ha-256890-m02" [8e9f0100-82de-408a-8201-b51d4539c897] Running
	I1105 18:08:12.243250  335586 system_pods.go:61] "kube-scheduler-ha-256890-m03" [1ab8fe88-73cf-4ccd-a2cc-48d69b7579c0] Running
	I1105 18:08:12.243272  335586 system_pods.go:61] "kube-vip-ha-256890" [d6c49b64-a886-46b0-b4e4-74f7eea29bad] Running
	I1105 18:08:12.243292  335586 system_pods.go:61] "kube-vip-ha-256890-m02" [691ec814-3af7-4d47-8e41-b1b89f693733] Running
	I1105 18:08:12.243323  335586 system_pods.go:61] "kube-vip-ha-256890-m03" [ccaceab5-c1df-4f0f-8fe4-00cbde487c48] Running
	I1105 18:08:12.243347  335586 system_pods.go:61] "storage-provisioner" [7fc064a9-a337-41ae-af49-77dc1192a13d] Running
	I1105 18:08:12.243369  335586 system_pods.go:74] duration metric: took 175.356301ms to wait for pod list to return data ...
	I1105 18:08:12.243391  335586 default_sa.go:34] waiting for default service account to be created ...
	I1105 18:08:12.425766  335586 request.go:632] Waited for 182.266855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:08:12.425827  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1105 18:08:12.425837  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:12.425846  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:12.425850  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:12.428841  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:12.429001  335586 default_sa.go:45] found service account: "default"
	I1105 18:08:12.429020  335586 default_sa.go:55] duration metric: took 185.60929ms for default service account to be created ...
	I1105 18:08:12.429030  335586 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 18:08:12.625333  335586 request.go:632] Waited for 196.222955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1105 18:08:12.625398  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1105 18:08:12.625408  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:12.625417  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:12.625421  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:12.630968  335586 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:08:12.640412  335586 system_pods.go:86] 26 kube-system pods found
	I1105 18:08:12.640452  335586 system_pods.go:89] "coredns-7c65d6cfc9-2lr9d" [9dd129e6-b269-4247-9fcd-a1d83d4de3ed] Running
	I1105 18:08:12.640460  335586 system_pods.go:89] "coredns-7c65d6cfc9-mtrp9" [6c8c450e-1782-4152-98cd-7fc8865610c1] Running
	I1105 18:08:12.640465  335586 system_pods.go:89] "etcd-ha-256890" [ee67871a-90e7-4d85-a10a-309dd2616edf] Running
	I1105 18:08:12.640470  335586 system_pods.go:89] "etcd-ha-256890-m02" [f07aaa9a-a819-4978-a356-5bef70c8afac] Running
	I1105 18:08:12.640500  335586 system_pods.go:89] "etcd-ha-256890-m03" [38a0c265-5e88-4084-86a5-e35caa172439] Running
	I1105 18:08:12.640511  335586 system_pods.go:89] "kindnet-2wtgp" [f5fe806a-70e0-4960-8c08-7151f6d20903] Running
	I1105 18:08:12.640516  335586 system_pods.go:89] "kindnet-gbjp6" [1b6e7ccf-4bd0-4f43-b9be-ceee89958178] Running
	I1105 18:08:12.640520  335586 system_pods.go:89] "kindnet-qhrld" [0d32eade-996f-4ff4-9d32-a7e4f852794e] Running
	I1105 18:08:12.640525  335586 system_pods.go:89] "kindnet-xmj9b" [0e1c2dff-a586-4ead-bdc7-62d89e53fae9] Running
	I1105 18:08:12.640532  335586 system_pods.go:89] "kube-apiserver-ha-256890" [3c8b1887-5354-477a-a1e4-40b6123e7a9f] Running
	I1105 18:08:12.640537  335586 system_pods.go:89] "kube-apiserver-ha-256890-m02" [5df2c5c3-3e7b-4749-a0d5-fa53bda0c0cf] Running
	I1105 18:08:12.640544  335586 system_pods.go:89] "kube-apiserver-ha-256890-m03" [6c2892f1-9be7-4ce6-a064-687199ff68bc] Running
	I1105 18:08:12.640549  335586 system_pods.go:89] "kube-controller-manager-ha-256890" [1d36bcf7-9778-435b-bb43-7a9c9fa82f7d] Running
	I1105 18:08:12.640554  335586 system_pods.go:89] "kube-controller-manager-ha-256890-m02" [d97fc050-9549-421e-ab8b-d8c921c1fae1] Running
	I1105 18:08:12.640694  335586 system_pods.go:89] "kube-controller-manager-ha-256890-m03" [e59d6bda-b88b-453d-b7ee-1435753a4b94] Running
	I1105 18:08:12.640710  335586 system_pods.go:89] "kube-proxy-8wk8p" [4b477b09-f30c-4b04-bb4b-4d93352d67d1] Running
	I1105 18:08:12.640716  335586 system_pods.go:89] "kube-proxy-8xxrt" [b440b7e8-a9ea-46b2-aa4c-e328a4992dc9] Running
	I1105 18:08:12.640721  335586 system_pods.go:89] "kube-proxy-bvn86" [8704b8e9-7835-4867-a696-3721a0c45574] Running
	I1105 18:08:12.640746  335586 system_pods.go:89] "kube-proxy-fkfkc" [ec5c8310-bbce-42a1-92c1-7c40c05f665f] Running
	I1105 18:08:12.640750  335586 system_pods.go:89] "kube-scheduler-ha-256890" [8087e2e5-a98e-44e9-bc3f-3cef224c7d01] Running
	I1105 18:08:12.640755  335586 system_pods.go:89] "kube-scheduler-ha-256890-m02" [8e9f0100-82de-408a-8201-b51d4539c897] Running
	I1105 18:08:12.640762  335586 system_pods.go:89] "kube-scheduler-ha-256890-m03" [1ab8fe88-73cf-4ccd-a2cc-48d69b7579c0] Running
	I1105 18:08:12.640779  335586 system_pods.go:89] "kube-vip-ha-256890" [d6c49b64-a886-46b0-b4e4-74f7eea29bad] Running
	I1105 18:08:12.640788  335586 system_pods.go:89] "kube-vip-ha-256890-m02" [691ec814-3af7-4d47-8e41-b1b89f693733] Running
	I1105 18:08:12.640792  335586 system_pods.go:89] "kube-vip-ha-256890-m03" [ccaceab5-c1df-4f0f-8fe4-00cbde487c48] Running
	I1105 18:08:12.640796  335586 system_pods.go:89] "storage-provisioner" [7fc064a9-a337-41ae-af49-77dc1192a13d] Running
	I1105 18:08:12.640825  335586 system_pods.go:126] duration metric: took 211.786355ms to wait for k8s-apps to be running ...
	I1105 18:08:12.640839  335586 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 18:08:12.640908  335586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:08:12.652966  335586 system_svc.go:56] duration metric: took 12.117965ms WaitForService to wait for kubelet
	I1105 18:08:12.652996  335586 kubeadm.go:582] duration metric: took 32.063104344s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:08:12.653046  335586 node_conditions.go:102] verifying NodePressure condition ...
	I1105 18:08:12.825602  335586 request.go:632] Waited for 172.448697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1105 18:08:12.825688  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I1105 18:08:12.825703  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:12.825712  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:12.825718  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:12.828897  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:12.830785  335586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1105 18:08:12.830815  335586 node_conditions.go:123] node cpu capacity is 2
	I1105 18:08:12.830827  335586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1105 18:08:12.830831  335586 node_conditions.go:123] node cpu capacity is 2
	I1105 18:08:12.830855  335586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1105 18:08:12.830864  335586 node_conditions.go:123] node cpu capacity is 2
	I1105 18:08:12.830869  335586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1105 18:08:12.830874  335586 node_conditions.go:123] node cpu capacity is 2
	I1105 18:08:12.830879  335586 node_conditions.go:105] duration metric: took 177.811898ms to run NodePressure ...
	I1105 18:08:12.830893  335586 start.go:241] waiting for startup goroutines ...
	I1105 18:08:12.830924  335586 start.go:255] writing updated cluster config ...
	I1105 18:08:12.834086  335586 out.go:201] 
	I1105 18:08:12.837735  335586 config.go:182] Loaded profile config "ha-256890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:08:12.837884  335586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/config.json ...
	I1105 18:08:12.841241  335586 out.go:177] * Starting "ha-256890-m04" worker node in "ha-256890" cluster
	I1105 18:08:12.843823  335586 cache.go:121] Beginning downloading kic base image for docker with crio
	I1105 18:08:12.846797  335586 out.go:177] * Pulling base image v0.0.45-1730282848-19883 ...
	I1105 18:08:12.849947  335586 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 18:08:12.849978  335586 cache.go:56] Caching tarball of preloaded images
	I1105 18:08:12.850033  335586 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local docker daemon
	I1105 18:08:12.850119  335586 preload.go:172] Found /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1105 18:08:12.850131  335586 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 18:08:12.850269  335586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/config.json ...
	I1105 18:08:12.871474  335586 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local docker daemon, skipping pull
	I1105 18:08:12.871497  335586 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 exists in daemon, skipping load
	I1105 18:08:12.871511  335586 cache.go:194] Successfully downloaded all kic artifacts
	I1105 18:08:12.871536  335586 start.go:360] acquireMachinesLock for ha-256890-m04: {Name:mkc98a8f0f306f360252346dd68adf97a675088e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 18:08:12.871598  335586 start.go:364] duration metric: took 37.769µs to acquireMachinesLock for "ha-256890-m04"
	I1105 18:08:12.871622  335586 start.go:96] Skipping create...Using existing machine configuration
	I1105 18:08:12.871628  335586 fix.go:54] fixHost starting: m04
	I1105 18:08:12.871871  335586 cli_runner.go:164] Run: docker container inspect ha-256890-m04 --format={{.State.Status}}
	I1105 18:08:12.888677  335586 fix.go:112] recreateIfNeeded on ha-256890-m04: state=Stopped err=<nil>
	W1105 18:08:12.888710  335586 fix.go:138] unexpected machine state, will restart: <nil>
	I1105 18:08:12.891562  335586 out.go:177] * Restarting existing docker container for "ha-256890-m04" ...
	I1105 18:08:12.894242  335586 cli_runner.go:164] Run: docker start ha-256890-m04
	I1105 18:08:13.231732  335586 cli_runner.go:164] Run: docker container inspect ha-256890-m04 --format={{.State.Status}}
	I1105 18:08:13.253208  335586 kic.go:430] container "ha-256890-m04" state is running.
	I1105 18:08:13.253566  335586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256890-m04
	I1105 18:08:13.278604  335586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/config.json ...
	I1105 18:08:13.278844  335586 machine.go:93] provisionDockerMachine start ...
	I1105 18:08:13.278901  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m04
	I1105 18:08:13.304389  335586 main.go:141] libmachine: Using SSH client type: native
	I1105 18:08:13.304758  335586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I1105 18:08:13.304773  335586 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 18:08:13.305693  335586 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1105 18:08:16.440589  335586 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256890-m04
	
	I1105 18:08:16.440643  335586 ubuntu.go:169] provisioning hostname "ha-256890-m04"
	I1105 18:08:16.440719  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m04
	I1105 18:08:16.458775  335586 main.go:141] libmachine: Using SSH client type: native
	I1105 18:08:16.459059  335586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I1105 18:08:16.459070  335586 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-256890-m04 && echo "ha-256890-m04" | sudo tee /etc/hostname
	I1105 18:08:16.618885  335586 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-256890-m04
	
	I1105 18:08:16.618968  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m04
	I1105 18:08:16.640194  335586 main.go:141] libmachine: Using SSH client type: native
	I1105 18:08:16.640459  335586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I1105 18:08:16.640486  335586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-256890-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-256890-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-256890-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 18:08:16.778170  335586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 18:08:16.778199  335586 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19910-279806/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-279806/.minikube}
	I1105 18:08:16.778214  335586 ubuntu.go:177] setting up certificates
	I1105 18:08:16.778223  335586 provision.go:84] configureAuth start
	I1105 18:08:16.778288  335586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256890-m04
	I1105 18:08:16.798277  335586 provision.go:143] copyHostCerts
	I1105 18:08:16.798319  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem
	I1105 18:08:16.798351  335586 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem, removing ...
	I1105 18:08:16.798365  335586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem
	I1105 18:08:16.798444  335586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/key.pem (1679 bytes)
	I1105 18:08:16.798529  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem
	I1105 18:08:16.798552  335586 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem, removing ...
	I1105 18:08:16.798556  335586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem
	I1105 18:08:16.798586  335586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/ca.pem (1078 bytes)
	I1105 18:08:16.798627  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem
	I1105 18:08:16.798649  335586 exec_runner.go:144] found /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem, removing ...
	I1105 18:08:16.798653  335586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem
	I1105 18:08:16.798680  335586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-279806/.minikube/cert.pem (1123 bytes)
	I1105 18:08:16.798728  335586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem org=jenkins.ha-256890-m04 san=[127.0.0.1 192.168.49.5 ha-256890-m04 localhost minikube]
	I1105 18:08:17.367395  335586 provision.go:177] copyRemoteCerts
	I1105 18:08:17.367550  335586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 18:08:17.367614  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m04
	I1105 18:08:17.396402  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890-m04/id_rsa Username:docker}
	I1105 18:08:17.493040  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1105 18:08:17.493114  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1105 18:08:17.524923  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1105 18:08:17.524983  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 18:08:17.554280  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1105 18:08:17.554387  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 18:08:17.583093  335586 provision.go:87] duration metric: took 804.855637ms to configureAuth
	I1105 18:08:17.583131  335586 ubuntu.go:193] setting minikube options for container-runtime
	I1105 18:08:17.583365  335586 config.go:182] Loaded profile config "ha-256890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:08:17.583476  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m04
	I1105 18:08:17.611038  335586 main.go:141] libmachine: Using SSH client type: native
	I1105 18:08:17.611275  335586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil>  [] 0s} 127.0.0.1 33190 <nil> <nil>}
	I1105 18:08:17.611290  335586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 18:08:17.885000  335586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 18:08:17.885024  335586 machine.go:96] duration metric: took 4.606170993s to provisionDockerMachine
	I1105 18:08:17.885036  335586 start.go:293] postStartSetup for "ha-256890-m04" (driver="docker")
	I1105 18:08:17.885048  335586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 18:08:17.885118  335586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 18:08:17.885166  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m04
	I1105 18:08:17.906663  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890-m04/id_rsa Username:docker}
	I1105 18:08:18.002073  335586 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 18:08:18.006826  335586 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1105 18:08:18.006868  335586 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1105 18:08:18.006879  335586 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1105 18:08:18.006886  335586 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1105 18:08:18.006897  335586 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-279806/.minikube/addons for local assets ...
	I1105 18:08:18.006961  335586 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-279806/.minikube/files for local assets ...
	I1105 18:08:18.007042  335586 filesync.go:149] local asset: /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem -> 2851882.pem in /etc/ssl/certs
	I1105 18:08:18.007054  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem -> /etc/ssl/certs/2851882.pem
	I1105 18:08:18.007155  335586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1105 18:08:18.020054  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem --> /etc/ssl/certs/2851882.pem (1708 bytes)
	I1105 18:08:18.049746  335586 start.go:296] duration metric: took 164.693489ms for postStartSetup
	I1105 18:08:18.049833  335586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 18:08:18.049887  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m04
	I1105 18:08:18.068841  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890-m04/id_rsa Username:docker}
	I1105 18:08:18.159437  335586 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1105 18:08:18.164345  335586 fix.go:56] duration metric: took 5.292708943s for fixHost
	I1105 18:08:18.164373  335586 start.go:83] releasing machines lock for "ha-256890-m04", held for 5.29276093s
	I1105 18:08:18.164444  335586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256890-m04
	I1105 18:08:18.184033  335586 out.go:177] * Found network options:
	I1105 18:08:18.187314  335586 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1105 18:08:18.189878  335586 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 18:08:18.189915  335586 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 18:08:18.189927  335586 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 18:08:18.189951  335586 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 18:08:18.189966  335586 proxy.go:119] fail to check proxy env: Error ip not in block
	W1105 18:08:18.189976  335586 proxy.go:119] fail to check proxy env: Error ip not in block
	I1105 18:08:18.190052  335586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 18:08:18.190094  335586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 18:08:18.190177  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m04
	I1105 18:08:18.190098  335586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m04
	I1105 18:08:18.212034  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890-m04/id_rsa Username:docker}
	I1105 18:08:18.222394  335586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33190 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890-m04/id_rsa Username:docker}
	I1105 18:08:18.481267  335586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 18:08:18.486537  335586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:08:18.495396  335586 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1105 18:08:18.495476  335586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 18:08:18.505491  335586 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1105 18:08:18.505516  335586 start.go:495] detecting cgroup driver to use...
	I1105 18:08:18.505553  335586 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1105 18:08:18.505605  335586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 18:08:18.519891  335586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 18:08:18.531395  335586 docker.go:217] disabling cri-docker service (if available) ...
	I1105 18:08:18.531463  335586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 18:08:18.544407  335586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 18:08:18.557576  335586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 18:08:18.662342  335586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 18:08:18.767656  335586 docker.go:233] disabling docker service ...
	I1105 18:08:18.767734  335586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 18:08:18.782772  335586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 18:08:18.795555  335586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 18:08:18.891620  335586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 18:08:18.989310  335586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 18:08:19.003222  335586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 18:08:19.031659  335586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 18:08:19.031735  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:08:19.043249  335586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 18:08:19.043330  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:08:19.055376  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:08:19.066218  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:08:19.079591  335586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 18:08:19.089752  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:08:19.103420  335586 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:08:19.114501  335586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 18:08:19.125725  335586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 18:08:19.134803  335586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 18:08:19.144089  335586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:08:19.229065  335586 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 18:08:19.361786  335586 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 18:08:19.361861  335586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 18:08:19.366486  335586 start.go:563] Will wait 60s for crictl version
	I1105 18:08:19.366550  335586 ssh_runner.go:195] Run: which crictl
	I1105 18:08:19.370362  335586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 18:08:19.417748  335586 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1105 18:08:19.417844  335586 ssh_runner.go:195] Run: crio --version
	I1105 18:08:19.458227  335586 ssh_runner.go:195] Run: crio --version
	I1105 18:08:19.505710  335586 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1105 18:08:19.508328  335586 out.go:177]   - env NO_PROXY=192.168.49.2
	I1105 18:08:19.510985  335586 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1105 18:08:19.513697  335586 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1105 18:08:19.516291  335586 cli_runner.go:164] Run: docker network inspect ha-256890 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1105 18:08:19.544245  335586 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1105 18:08:19.548071  335586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:08:19.560831  335586 mustload.go:65] Loading cluster: ha-256890
	I1105 18:08:19.561069  335586 config.go:182] Loaded profile config "ha-256890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:08:19.561341  335586 cli_runner.go:164] Run: docker container inspect ha-256890 --format={{.State.Status}}
	I1105 18:08:19.579922  335586 host.go:66] Checking if "ha-256890" exists ...
	I1105 18:08:19.580316  335586 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890 for IP: 192.168.49.5
	I1105 18:08:19.580334  335586 certs.go:194] generating shared ca certs ...
	I1105 18:08:19.580349  335586 certs.go:226] acquiring lock for ca certs: {Name:mk7e394808202081d7250bf8ad59a3f119279ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 18:08:19.580527  335586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key
	I1105 18:08:19.580600  335586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key
	I1105 18:08:19.580670  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1105 18:08:19.580688  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1105 18:08:19.580699  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1105 18:08:19.580712  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1105 18:08:19.580790  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/285188.pem (1338 bytes)
	W1105 18:08:19.580834  335586 certs.go:480] ignoring /home/jenkins/minikube-integration/19910-279806/.minikube/certs/285188_empty.pem, impossibly tiny 0 bytes
	I1105 18:08:19.580847  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 18:08:19.580871  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/ca.pem (1078 bytes)
	I1105 18:08:19.580898  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/cert.pem (1123 bytes)
	I1105 18:08:19.580923  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/key.pem (1679 bytes)
	I1105 18:08:19.580972  335586 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem (1708 bytes)
	I1105 18:08:19.581004  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem -> /usr/share/ca-certificates/2851882.pem
	I1105 18:08:19.581020  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:08:19.581037  335586 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19910-279806/.minikube/certs/285188.pem -> /usr/share/ca-certificates/285188.pem
	I1105 18:08:19.581057  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 18:08:19.607641  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 18:08:19.635490  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 18:08:19.662402  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1105 18:08:19.690195  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/ssl/certs/2851882.pem --> /usr/share/ca-certificates/2851882.pem (1708 bytes)
	I1105 18:08:19.721409  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 18:08:19.747012  335586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-279806/.minikube/certs/285188.pem --> /usr/share/ca-certificates/285188.pem (1338 bytes)
	I1105 18:08:19.775323  335586 ssh_runner.go:195] Run: openssl version
	I1105 18:08:19.780669  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2851882.pem && ln -fs /usr/share/ca-certificates/2851882.pem /etc/ssl/certs/2851882.pem"
	I1105 18:08:19.790846  335586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2851882.pem
	I1105 18:08:19.794514  335586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  5 17:57 /usr/share/ca-certificates/2851882.pem
	I1105 18:08:19.794582  335586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2851882.pem
	I1105 18:08:19.801417  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2851882.pem /etc/ssl/certs/3ec20f2e.0"
	I1105 18:08:19.810811  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 18:08:19.821468  335586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:08:19.825036  335586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:47 /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:08:19.825162  335586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 18:08:19.832352  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 18:08:19.843593  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/285188.pem && ln -fs /usr/share/ca-certificates/285188.pem /etc/ssl/certs/285188.pem"
	I1105 18:08:19.853811  335586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/285188.pem
	I1105 18:08:19.857358  335586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  5 17:57 /usr/share/ca-certificates/285188.pem
	I1105 18:08:19.857447  335586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/285188.pem
	I1105 18:08:19.864463  335586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/285188.pem /etc/ssl/certs/51391683.0"
	I1105 18:08:19.873846  335586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 18:08:19.877215  335586 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 18:08:19.877261  335586 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.2  false true} ...
	I1105 18:08:19.877382  335586 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-256890-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-256890 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 18:08:19.877450  335586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 18:08:19.887734  335586 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 18:08:19.887866  335586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1105 18:08:19.897243  335586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1105 18:08:19.916327  335586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 18:08:19.935846  335586 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1105 18:08:19.939592  335586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 18:08:19.953765  335586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:08:20.046955  335586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:08:20.062960  335586 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1105 18:08:20.063495  335586 config.go:182] Loaded profile config "ha-256890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:08:20.065998  335586 out.go:177] * Verifying Kubernetes components...
	I1105 18:08:20.068687  335586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 18:08:20.175058  335586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 18:08:20.188256  335586 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19910-279806/kubeconfig
	I1105 18:08:20.188559  335586 kapi.go:59] client config for ha-256890: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/client.crt", KeyFile:"/home/jenkins/minikube-integration/19910-279806/.minikube/profiles/ha-256890/client.key", CAFile:"/home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e9d0d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1105 18:08:20.188653  335586 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1105 18:08:20.188891  335586 node_ready.go:35] waiting up to 6m0s for node "ha-256890-m04" to be "Ready" ...
	I1105 18:08:20.188973  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:20.188984  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:20.188993  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:20.189002  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:20.194362  335586 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:08:20.689823  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:20.689898  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:20.689925  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:20.689964  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:20.694806  335586 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1105 18:08:21.189170  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:21.189191  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:21.189201  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:21.189206  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:21.192229  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:21.689367  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:21.689390  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:21.689400  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:21.689406  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:21.692321  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:22.189735  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:22.189758  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:22.189767  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:22.189772  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:22.192488  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:22.193142  335586 node_ready.go:53] node "ha-256890-m04" has status "Ready":"Unknown"
	I1105 18:08:22.689871  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:22.689894  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:22.689902  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:22.689905  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:22.692649  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:23.189140  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:23.189162  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:23.189171  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:23.189182  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:23.197992  335586 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1105 18:08:23.689853  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:23.689874  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:23.689884  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:23.689889  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:23.692473  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:24.189965  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:24.189988  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:24.189998  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:24.190003  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:24.192962  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:24.193778  335586 node_ready.go:53] node "ha-256890-m04" has status "Ready":"Unknown"
	I1105 18:08:24.689424  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:24.689447  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:24.689458  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:24.689462  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:24.697854  335586 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1105 18:08:25.189346  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:25.189372  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:25.189383  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:25.189388  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:25.192637  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:25.689874  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:25.689901  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:25.689913  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:25.689918  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:25.692798  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:26.189705  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:26.189740  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:26.189750  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:26.189755  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:26.192569  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:26.690075  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:26.690099  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:26.690109  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:26.690114  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:26.693335  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:26.693976  335586 node_ready.go:53] node "ha-256890-m04" has status "Ready":"Unknown"
	I1105 18:08:27.189673  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:27.189698  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:27.189709  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:27.189713  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:27.192309  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:27.192985  335586 node_ready.go:49] node "ha-256890-m04" has status "Ready":"True"
	I1105 18:08:27.193003  335586 node_ready.go:38] duration metric: took 7.004093367s for node "ha-256890-m04" to be "Ready" ...
	I1105 18:08:27.193014  335586 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:08:27.193086  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1105 18:08:27.193099  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:27.193113  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:27.193120  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:27.198436  335586 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1105 18:08:27.208595  335586 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2lr9d" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:27.208850  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-2lr9d
	I1105 18:08:27.208861  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:27.208870  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:27.208877  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:27.212295  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:27.213054  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:08:27.213074  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:27.213084  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:27.213088  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:27.215927  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:27.216545  335586 pod_ready.go:98] node "ha-256890" hosting pod "coredns-7c65d6cfc9-2lr9d" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:27.216568  335586 pod_ready.go:82] duration metric: took 7.799279ms for pod "coredns-7c65d6cfc9-2lr9d" in "kube-system" namespace to be "Ready" ...
	E1105 18:08:27.216579  335586 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-256890" hosting pod "coredns-7c65d6cfc9-2lr9d" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:27.216586  335586 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mtrp9" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:27.216717  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mtrp9
	I1105 18:08:27.216732  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:27.216741  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:27.216745  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:27.219234  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:27.219970  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:08:27.219987  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:27.219996  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:27.220001  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:27.222551  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:27.223185  335586 pod_ready.go:98] node "ha-256890" hosting pod "coredns-7c65d6cfc9-mtrp9" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:27.223209  335586 pod_ready.go:82] duration metric: took 6.611804ms for pod "coredns-7c65d6cfc9-mtrp9" in "kube-system" namespace to be "Ready" ...
	E1105 18:08:27.223237  335586 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-256890" hosting pod "coredns-7c65d6cfc9-mtrp9" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:27.223252  335586 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-256890" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:27.223331  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890
	I1105 18:08:27.223342  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:27.223351  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:27.223356  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:27.225881  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:27.226668  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:08:27.226682  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:27.226691  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:27.226696  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:27.229165  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:27.229764  335586 pod_ready.go:98] node "ha-256890" hosting pod "etcd-ha-256890" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:27.229784  335586 pod_ready.go:82] duration metric: took 6.524821ms for pod "etcd-ha-256890" in "kube-system" namespace to be "Ready" ...
	E1105 18:08:27.229794  335586 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-256890" hosting pod "etcd-ha-256890" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:27.229802  335586 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:27.229864  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m02
	I1105 18:08:27.229878  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:27.229885  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:27.229890  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:27.232300  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:27.233138  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:08:27.233157  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:27.233166  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:27.233171  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:27.235611  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:27.236259  335586 pod_ready.go:93] pod "etcd-ha-256890-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:27.236279  335586 pod_ready.go:82] duration metric: took 6.466269ms for pod "etcd-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:27.236291  335586 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:27.390520  335586 request.go:632] Waited for 154.156353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:27.390583  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-256890-m03
	I1105 18:08:27.390589  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:27.390597  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:27.390601  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:27.393592  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:27.590604  335586 request.go:632] Waited for 196.337291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:27.590691  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:27.590706  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:27.590716  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:27.590721  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:27.593730  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:27.594286  335586 pod_ready.go:93] pod "etcd-ha-256890-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:27.594308  335586 pod_ready.go:82] duration metric: took 358.008827ms for pod "etcd-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:27.594332  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-256890" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:27.790020  335586 request.go:632] Waited for 195.607942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256890
	I1105 18:08:27.790076  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256890
	I1105 18:08:27.790087  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:27.790096  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:27.790104  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:27.793159  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:27.990402  335586 request.go:632] Waited for 196.33446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:08:27.990474  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:08:27.990486  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:27.990496  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:27.990501  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:27.993397  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:27.993995  335586 pod_ready.go:98] node "ha-256890" hosting pod "kube-apiserver-ha-256890" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:27.994017  335586 pod_ready.go:82] duration metric: took 399.671675ms for pod "kube-apiserver-ha-256890" in "kube-system" namespace to be "Ready" ...
	E1105 18:08:27.994027  335586 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-256890" hosting pod "kube-apiserver-ha-256890" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:27.994035  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:28.190515  335586 request.go:632] Waited for 196.416535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256890-m02
	I1105 18:08:28.190581  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256890-m02
	I1105 18:08:28.190591  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:28.190600  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:28.190611  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:28.193906  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:28.390003  335586 request.go:632] Waited for 195.334529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:08:28.390064  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:08:28.390079  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:28.390104  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:28.390114  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:28.392933  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:28.393606  335586 pod_ready.go:93] pod "kube-apiserver-ha-256890-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:28.393624  335586 pod_ready.go:82] duration metric: took 399.580924ms for pod "kube-apiserver-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:28.393638  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:28.590496  335586 request.go:632] Waited for 196.785381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256890-m03
	I1105 18:08:28.590607  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-256890-m03
	I1105 18:08:28.590619  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:28.590627  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:28.590631  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:28.594097  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:28.790144  335586 request.go:632] Waited for 195.330123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:28.790199  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:28.790206  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:28.790222  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:28.790226  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:28.793230  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:28.793797  335586 pod_ready.go:93] pod "kube-apiserver-ha-256890-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:28.793817  335586 pod_ready.go:82] duration metric: took 400.166485ms for pod "kube-apiserver-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:28.793830  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-256890" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:28.990714  335586 request.go:632] Waited for 196.81789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256890
	I1105 18:08:28.990781  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256890
	I1105 18:08:28.990793  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:28.990802  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:28.990806  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:28.993749  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:29.189914  335586 request.go:632] Waited for 195.212648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:08:29.190043  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:08:29.190055  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:29.190064  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:29.190070  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:29.193013  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:29.193743  335586 pod_ready.go:98] node "ha-256890" hosting pod "kube-controller-manager-ha-256890" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:29.193765  335586 pod_ready.go:82] duration metric: took 399.92695ms for pod "kube-controller-manager-ha-256890" in "kube-system" namespace to be "Ready" ...
	E1105 18:08:29.193777  335586 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-256890" hosting pod "kube-controller-manager-ha-256890" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:29.193787  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:29.390325  335586 request.go:632] Waited for 196.460046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256890-m02
	I1105 18:08:29.390408  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256890-m02
	I1105 18:08:29.390418  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:29.390427  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:29.390434  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:29.393425  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:29.590688  335586 request.go:632] Waited for 196.327869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:08:29.590744  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:08:29.590750  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:29.590759  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:29.590769  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:29.593705  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:29.594225  335586 pod_ready.go:93] pod "kube-controller-manager-ha-256890-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:29.594244  335586 pod_ready.go:82] duration metric: took 400.441428ms for pod "kube-controller-manager-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:29.594256  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:29.790090  335586 request.go:632] Waited for 195.751966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256890-m03
	I1105 18:08:29.790251  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-256890-m03
	I1105 18:08:29.790261  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:29.790270  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:29.790281  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:29.793083  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:29.990267  335586 request.go:632] Waited for 196.19785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:29.990347  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:29.990359  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:29.990368  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:29.990376  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:29.992971  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:29.993584  335586 pod_ready.go:93] pod "kube-controller-manager-ha-256890-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:29.993608  335586 pod_ready.go:82] duration metric: took 399.336136ms for pod "kube-controller-manager-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:29.993638  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8wk8p" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:30.190069  335586 request.go:632] Waited for 196.362386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8wk8p
	I1105 18:08:30.190125  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8wk8p
	I1105 18:08:30.190136  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:30.190145  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:30.190153  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:30.192999  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:30.389811  335586 request.go:632] Waited for 196.141947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:08:30.389912  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:08:30.389932  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:30.389954  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:30.389968  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:30.392976  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:30.393581  335586 pod_ready.go:98] node "ha-256890" hosting pod "kube-proxy-8wk8p" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:30.393605  335586 pod_ready.go:82] duration metric: took 399.954122ms for pod "kube-proxy-8wk8p" in "kube-system" namespace to be "Ready" ...
	E1105 18:08:30.393619  335586 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-256890" hosting pod "kube-proxy-8wk8p" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:30.393632  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8xxrt" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:30.590474  335586 request.go:632] Waited for 196.771914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xxrt
	I1105 18:08:30.590532  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xxrt
	I1105 18:08:30.590542  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:30.590557  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:30.590563  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:30.593516  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:30.789724  335586 request.go:632] Waited for 195.175224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:30.789785  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:30.789791  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:30.789807  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:30.789813  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:30.792754  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:30.793445  335586 pod_ready.go:93] pod "kube-proxy-8xxrt" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:30.793469  335586 pod_ready.go:82] duration metric: took 399.828164ms for pod "kube-proxy-8xxrt" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:30.793481  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bvn86" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:30.990461  335586 request.go:632] Waited for 196.883635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:08:30.990572  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:08:30.990603  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:30.990632  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:30.990653  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:30.994230  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:31.190316  335586 request.go:632] Waited for 195.290792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:31.190372  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:31.190381  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:31.190390  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:31.190398  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:31.193214  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:31.389754  335586 request.go:632] Waited for 95.224958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:08:31.389835  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:08:31.389845  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:31.389854  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:31.389878  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:31.393006  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:31.590278  335586 request.go:632] Waited for 196.321187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:31.590336  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:31.590342  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:31.590352  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:31.590356  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:31.593048  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:31.793687  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:08:31.793711  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:31.793721  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:31.793726  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:31.796804  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:31.990174  335586 request.go:632] Waited for 192.307069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:31.990233  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:31.990242  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:31.990251  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:31.990258  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:31.992992  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:32.293823  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:08:32.293850  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:32.293860  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:32.293865  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:32.304314  335586 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1105 18:08:32.390559  335586 request.go:632] Waited for 85.170549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:32.390648  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:32.390686  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:32.390702  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:32.390709  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:32.394004  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:32.793671  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:08:32.793694  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:32.793704  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:32.793709  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:32.796772  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:32.797581  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:32.797628  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:32.797644  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:32.797648  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:32.800375  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:32.801107  335586 pod_ready.go:103] pod "kube-proxy-bvn86" in "kube-system" namespace has status "Ready":"False"
	I1105 18:08:33.294421  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:08:33.294497  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:33.294521  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:33.294551  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:33.297588  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:33.298483  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:33.298501  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:33.298511  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:33.298515  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:33.300955  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:33.793862  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:08:33.793885  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:33.793895  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:33.793900  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:33.796652  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:33.797622  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:33.797639  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:33.797648  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:33.797653  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:33.800221  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:34.293876  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:08:34.293897  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:34.293908  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:34.293915  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:34.296490  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:34.297497  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:34.297515  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:34.297525  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:34.297530  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:34.299930  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:34.793846  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:08:34.793868  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:34.793878  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:34.793882  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:34.796953  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:34.797725  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:34.797748  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:34.797759  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:34.797764  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:34.800365  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:35.294186  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:08:35.294207  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:35.294218  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:35.294223  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:35.297094  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:35.297821  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:35.297841  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:35.297852  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:35.297856  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:35.300403  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:35.301157  335586 pod_ready.go:103] pod "kube-proxy-bvn86" in "kube-system" namespace has status "Ready":"False"
	I1105 18:08:35.793880  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:08:35.793952  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:35.793977  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:35.794005  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:35.797927  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:35.799361  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:35.799378  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:35.799388  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:35.799393  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:35.802380  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:36.294574  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bvn86
	I1105 18:08:36.294602  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:36.294613  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:36.294617  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:36.332451  335586 round_trippers.go:574] Response Status: 200 OK in 37 milliseconds
	I1105 18:08:36.336273  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m04
	I1105 18:08:36.336342  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:36.336368  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:36.336389  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:36.350958  335586 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1105 18:08:36.352125  335586 pod_ready.go:93] pod "kube-proxy-bvn86" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:36.352149  335586 pod_ready.go:82] duration metric: took 5.558659851s for pod "kube-proxy-bvn86" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:36.352161  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkfkc" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:36.352227  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkfkc
	I1105 18:08:36.352237  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:36.352245  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:36.352250  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:36.376086  335586 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I1105 18:08:36.381568  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:08:36.381589  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:36.381599  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:36.381604  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:36.392500  335586 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1105 18:08:36.393126  335586 pod_ready.go:93] pod "kube-proxy-fkfkc" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:36.393156  335586 pod_ready.go:82] duration metric: took 40.987627ms for pod "kube-proxy-fkfkc" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:36.393169  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-256890" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:36.393284  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256890
	I1105 18:08:36.393296  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:36.393305  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:36.393321  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:36.404519  335586 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1105 18:08:36.406514  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890
	I1105 18:08:36.406539  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:36.406587  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:36.406599  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:36.413212  335586 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1105 18:08:36.414131  335586 pod_ready.go:98] node "ha-256890" hosting pod "kube-scheduler-ha-256890" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:36.414157  335586 pod_ready.go:82] duration metric: took 20.954308ms for pod "kube-scheduler-ha-256890" in "kube-system" namespace to be "Ready" ...
	E1105 18:08:36.414195  335586 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-256890" hosting pod "kube-scheduler-ha-256890" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-256890" has status "Ready":"Unknown"
	I1105 18:08:36.414209  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:36.590584  335586 request.go:632] Waited for 176.308087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256890-m02
	I1105 18:08:36.590662  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256890-m02
	I1105 18:08:36.590691  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:36.590706  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:36.590710  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:36.593425  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:36.790435  335586 request.go:632] Waited for 196.353747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:08:36.790501  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m02
	I1105 18:08:36.790507  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:36.790516  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:36.790521  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:36.798272  335586 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1105 18:08:36.799580  335586 pod_ready.go:93] pod "kube-scheduler-ha-256890-m02" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:36.799641  335586 pod_ready.go:82] duration metric: took 385.422145ms for pod "kube-scheduler-ha-256890-m02" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:36.799660  335586 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:36.990070  335586 request.go:632] Waited for 190.324116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256890-m03
	I1105 18:08:36.990134  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-256890-m03
	I1105 18:08:36.990143  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:36.990151  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:36.990155  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:36.993252  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:37.189968  335586 request.go:632] Waited for 196.133044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:37.190083  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-256890-m03
	I1105 18:08:37.190102  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:37.190110  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:37.190115  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:37.192841  335586 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1105 18:08:37.193654  335586 pod_ready.go:93] pod "kube-scheduler-ha-256890-m03" in "kube-system" namespace has status "Ready":"True"
	I1105 18:08:37.193675  335586 pod_ready.go:82] duration metric: took 394.006328ms for pod "kube-scheduler-ha-256890-m03" in "kube-system" namespace to be "Ready" ...
	I1105 18:08:37.193690  335586 pod_ready.go:39] duration metric: took 10.000660248s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 18:08:37.193704  335586 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 18:08:37.193786  335586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:08:37.207942  335586 system_svc.go:56] duration metric: took 14.229617ms WaitForService to wait for kubelet
	I1105 18:08:37.207970  335586 kubeadm.go:582] duration metric: took 17.144956196s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 18:08:37.207988  335586 node_conditions.go:102] verifying NodePressure condition ...
	I1105 18:08:37.390548  335586 request.go:632] Waited for 182.488479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1105 18:08:37.390630  335586 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I1105 18:08:37.390637  335586 round_trippers.go:469] Request Headers:
	I1105 18:08:37.390645  335586 round_trippers.go:473]     Accept: application/json, */*
	I1105 18:08:37.390649  335586 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1105 18:08:37.393982  335586 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1105 18:08:37.395440  335586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1105 18:08:37.395468  335586 node_conditions.go:123] node cpu capacity is 2
	I1105 18:08:37.395481  335586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1105 18:08:37.395486  335586 node_conditions.go:123] node cpu capacity is 2
	I1105 18:08:37.395490  335586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1105 18:08:37.395493  335586 node_conditions.go:123] node cpu capacity is 2
	I1105 18:08:37.395497  335586 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1105 18:08:37.395502  335586 node_conditions.go:123] node cpu capacity is 2
	I1105 18:08:37.395513  335586 node_conditions.go:105] duration metric: took 187.519764ms to run NodePressure ...
	I1105 18:08:37.395528  335586 start.go:241] waiting for startup goroutines ...
	I1105 18:08:37.395556  335586 start.go:255] writing updated cluster config ...
	I1105 18:08:37.395892  335586 ssh_runner.go:195] Run: rm -f paused
	I1105 18:08:37.463684  335586 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 18:08:37.468831  335586 out.go:177] * Done! kubectl is now configured to use "ha-256890" cluster and "default" namespace by default
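
	Note: the node_ready/pod_ready entries above are produced by minikube repeatedly issuing GET requests against /api/v1/nodes/<name> and the kube-system pods until the restarted members report Ready. As a rough illustration only, the sketch below shows an equivalent polling loop written against client-go; the function name pollNodeReady, the 500ms interval, and the kubeconfig handling are assumptions made for the example and are not taken from minikube's source.

	// readywait_sketch.go - illustrative only; mirrors the node_ready polling
	// visible in the log above. Not minikube source code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// pollNodeReady issues GET /api/v1/nodes/<name> until the Ready condition
	// is True or the timeout expires (the interval is an assumption, not
	// minikube's actual value).
	func pollNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := pollNodeReady(context.Background(), cs, "ha-256890-m04", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node Ready")
	}

	The same shape of loop explains the pod_ready waits as well, with the node GET replaced by a pod GET plus a check of the hosting node's Ready condition, which is why a node stuck in "Ready":"Unknown" causes the "(skipping!)" entries above.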
	
	
	==> CRI-O <==
	Nov 05 18:07:22 ha-256890 crio[645]: time="2024-11-05 18:07:22.245801197Z" level=info msg="Started container" PID=1809 containerID=e1cd70ed2ce2f0d283da0571c4b534749bd085ce2b79713949124329913a0ea9 description=kube-system/kube-vip-ha-256890/kube-vip id=baebf607-3ab3-4a80-8b7c-6ddfb635c563 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c4023f1fe607eeb207e6cdbdda8355d110c74d753e3217c4c8710f74b2667ee7
	Nov 05 18:07:22 ha-256890 crio[645]: time="2024-11-05 18:07:22.297322259Z" level=info msg="Created container 3cf6cce389ad655760098a7c9e2c23f35fa09dfc842f6e2c4dfb2413ce5b4ec7: kube-system/storage-provisioner/storage-provisioner" id=a04812f7-b911-4607-9062-65326a87c9b0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 05 18:07:22 ha-256890 crio[645]: time="2024-11-05 18:07:22.297986245Z" level=info msg="Starting container: 3cf6cce389ad655760098a7c9e2c23f35fa09dfc842f6e2c4dfb2413ce5b4ec7" id=003103f5-c195-4a5f-aa11-4dbf4a97d420 name=/runtime.v1.RuntimeService/StartContainer
	Nov 05 18:07:22 ha-256890 crio[645]: time="2024-11-05 18:07:22.344459427Z" level=info msg="Started container" PID=1830 containerID=3cf6cce389ad655760098a7c9e2c23f35fa09dfc842f6e2c4dfb2413ce5b4ec7 description=kube-system/storage-provisioner/storage-provisioner id=003103f5-c195-4a5f-aa11-4dbf4a97d420 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0c04b5f79f52e893bd46b906881175206bac5fc17c06150ba58f26d9e33174dd
	Nov 05 18:07:25 ha-256890 crio[645]: time="2024-11-05 18:07:25.900325093Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.2" id=d7fa151d-44a2-4823-8c59-ee8baf35710b name=/runtime.v1.ImageService/ImageStatus
	Nov 05 18:07:25 ha-256890 crio[645]: time="2024-11-05 18:07:25.900542850Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.2],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752 registry.k8s.io/kube-controller-manager@sha256:b8d51076af39954cadc718ae40bd8a736ae5ad4e0654465ae91886cad3a9b602],Size_:86996294,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=d7fa151d-44a2-4823-8c59-ee8baf35710b name=/runtime.v1.ImageService/ImageStatus
	Nov 05 18:07:25 ha-256890 crio[645]: time="2024-11-05 18:07:25.901319157Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.2" id=e4cc0c85-0771-4616-8d30-f86ca9c10196 name=/runtime.v1.ImageService/ImageStatus
	Nov 05 18:07:25 ha-256890 crio[645]: time="2024-11-05 18:07:25.901498341Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.2],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752 registry.k8s.io/kube-controller-manager@sha256:b8d51076af39954cadc718ae40bd8a736ae5ad4e0654465ae91886cad3a9b602],Size_:86996294,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=e4cc0c85-0771-4616-8d30-f86ca9c10196 name=/runtime.v1.ImageService/ImageStatus
	Nov 05 18:07:25 ha-256890 crio[645]: time="2024-11-05 18:07:25.902142594Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-256890/kube-controller-manager" id=276ebbb1-5c43-4e28-b971-939c9158d38d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 05 18:07:25 ha-256890 crio[645]: time="2024-11-05 18:07:25.902243402Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 05 18:07:26 ha-256890 crio[645]: time="2024-11-05 18:07:26.019219688Z" level=info msg="Created container 31b1b9af4fa9068fe838f2616a8d349d9496c16557e8905e7258862927579fe2: kube-system/kube-controller-manager-ha-256890/kube-controller-manager" id=276ebbb1-5c43-4e28-b971-939c9158d38d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 05 18:07:26 ha-256890 crio[645]: time="2024-11-05 18:07:26.019828527Z" level=info msg="Starting container: 31b1b9af4fa9068fe838f2616a8d349d9496c16557e8905e7258862927579fe2" id=c67bc66a-bd82-4cb5-bbc3-d60f0d0b6d34 name=/runtime.v1.RuntimeService/StartContainer
	Nov 05 18:07:26 ha-256890 crio[645]: time="2024-11-05 18:07:26.033564020Z" level=info msg="Started container" PID=1887 containerID=31b1b9af4fa9068fe838f2616a8d349d9496c16557e8905e7258862927579fe2 description=kube-system/kube-controller-manager-ha-256890/kube-controller-manager id=c67bc66a-bd82-4cb5-bbc3-d60f0d0b6d34 name=/runtime.v1.RuntimeService/StartContainer sandboxID=95ab07d267882381c9cbd6b4c3e96e5c4f7416c4bf5257ea646d9bfef68530a9
	Nov 05 18:07:32 ha-256890 crio[645]: time="2024-11-05 18:07:32.071535451Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Nov 05 18:07:32 ha-256890 crio[645]: time="2024-11-05 18:07:32.090680620Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 05 18:07:32 ha-256890 crio[645]: time="2024-11-05 18:07:32.090721597Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 05 18:07:32 ha-256890 crio[645]: time="2024-11-05 18:07:32.090739697Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Nov 05 18:07:32 ha-256890 crio[645]: time="2024-11-05 18:07:32.097481949Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 05 18:07:32 ha-256890 crio[645]: time="2024-11-05 18:07:32.097516764Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 05 18:07:32 ha-256890 crio[645]: time="2024-11-05 18:07:32.097532436Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Nov 05 18:07:32 ha-256890 crio[645]: time="2024-11-05 18:07:32.100937412Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 05 18:07:32 ha-256890 crio[645]: time="2024-11-05 18:07:32.100972358Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 05 18:07:32 ha-256890 crio[645]: time="2024-11-05 18:07:32.100987644Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Nov 05 18:07:32 ha-256890 crio[645]: time="2024-11-05 18:07:32.105668137Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 05 18:07:32 ha-256890 crio[645]: time="2024-11-05 18:07:32.105702542Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	31b1b9af4fa90       9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba   About a minute ago   Running             kube-controller-manager   4                   95ab07d267882       kube-controller-manager-ha-256890
	3cf6cce389ad6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Running             storage-provisioner       3                   0c04b5f79f52e       storage-provisioner
	e1cd70ed2ce2f       cdd50ef879ee4760547f6c8631a18e33a8ecdf3261cde9504e2d0b342abfd7ef   About a minute ago   Running             kube-vip                  1                   c4023f1fe607e       kube-vip-ha-256890
	44577be34ba54       f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270   About a minute ago   Running             kube-apiserver            2                   e6bb6a8277a81       kube-apiserver-ha-256890
	8e34ca62c4b3d       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   2 minutes ago        Running             coredns                   1                   b61d01ef0687d       coredns-7c65d6cfc9-2lr9d
	8dd427b49e545       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   2 minutes ago        Running             coredns                   1                   d10487354705c       coredns-7c65d6cfc9-mtrp9
	4facf96ef7950       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   2 minutes ago        Running             busybox                   1                   dc8c02a2fb584       busybox-7dff88458-nwfsj
	c96929fd22cd5       021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba   2 minutes ago        Running             kube-proxy                1                   030a9972b5a09       kube-proxy-8wk8p
	5cc054c7d6505       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago        Exited              storage-provisioner       2                   0c04b5f79f52e       storage-provisioner
	64b4f3d587b27       55b97e1cbb2a39e125fd41804d8dd0279b34111fe79fd4673ddc92bc97431ca2   2 minutes ago        Running             kindnet-cni               1                   c0255c61cab92       kindnet-gbjp6
	516a93c7ae35a       9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba   2 minutes ago        Exited              kube-controller-manager   3                   95ab07d267882       kube-controller-manager-ha-256890
	092b391ffd457       d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a   2 minutes ago        Running             kube-scheduler            1                   8fd9bce2693fc       kube-scheduler-ha-256890
	51a390a1ff539       cdd50ef879ee4760547f6c8631a18e33a8ecdf3261cde9504e2d0b342abfd7ef   2 minutes ago        Exited              kube-vip                  0                   c4023f1fe607e       kube-vip-ha-256890
	f5a17d6751ba7       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   2 minutes ago        Running             etcd                      1                   d8128b0362687       etcd-ha-256890
	df1e1adfe09d7       f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270   2 minutes ago        Exited              kube-apiserver            1                   e6bb6a8277a81       kube-apiserver-ha-256890
	
	
	==> coredns [8dd427b49e545c71f28e020af2324c5f2a2306484a8f389beda0d713a8b79044] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51895 - 18235 "HINFO IN 8681323940081506100.3257312353080245754. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03046645s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[358115856]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:06:51.762) (total time: 30001ms):
	Trace[358115856]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:07:21.763)
	Trace[358115856]: [30.001120666s] [30.001120666s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[90559149]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:06:51.762) (total time: 30001ms):
	Trace[90559149]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:07:21.763)
	Trace[90559149]: [30.001043259s] [30.001043259s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1319593983]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:06:51.762) (total time: 30002ms):
	Trace[1319593983]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:07:21.762)
	Trace[1319593983]: [30.00226495s] [30.00226495s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [8e34ca62c4b3d13b4d9fca4fce8ddd0fa3a2107d110867c78d22528a4f102602] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51916 - 57538 "HINFO IN 1534061052357367327.2776302797521020339. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01396593s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[119557312]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:06:51.705) (total time: 30003ms):
	Trace[119557312]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:07:21.706)
	Trace[119557312]: [30.003516651s] [30.003516651s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[852378236]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:06:51.706) (total time: 30005ms):
	Trace[852378236]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:07:21.708)
	Trace[852378236]: [30.005713032s] [30.005713032s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[787434858]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Nov-2024 18:06:51.706) (total time: 30003ms):
	Trace[787434858]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:07:21.707)
	Trace[787434858]: [30.003487736s] [30.003487736s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-256890
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-256890
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-256890
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T18_01_12_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:01:09 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-256890
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:08:46 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 05 Nov 2024 18:06:42 +0000   Tue, 05 Nov 2024 18:07:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 05 Nov 2024 18:06:42 +0000   Tue, 05 Nov 2024 18:07:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 05 Nov 2024 18:06:42 +0000   Tue, 05 Nov 2024 18:07:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 05 Nov 2024 18:06:42 +0000   Tue, 05 Nov 2024 18:07:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-256890
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 6be66fb6ef2946a99f128e9cbafca78e
	  System UUID:                8154d7b5-2c1a-4b5d-8986-e0e3b7a4d0d9
	  Boot ID:                    308934a7-38b0-4c4f-b876-76c17d9b7ecd
	  Kernel Version:             5.15.0-1071-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nwfsj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 coredns-7c65d6cfc9-2lr9d             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m36s
	  kube-system                 coredns-7c65d6cfc9-mtrp9             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m36s
	  kube-system                 etcd-ha-256890                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m41s
	  kube-system                 kindnet-gbjp6                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m37s
	  kube-system                 kube-apiserver-ha-256890             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m41s
	  kube-system                 kube-controller-manager-ha-256890    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m41s
	  kube-system                 kube-proxy-8wk8p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-scheduler-ha-256890             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m41s
	  kube-system                 kube-vip-ha-256890                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 119s                   kube-proxy       
	  Normal   Starting                 7m35s                  kube-proxy       
	  Normal   NodeHasSufficientPID     7m41s                  kubelet          Node ha-256890 status is now: NodeHasSufficientPID
	  Normal   Starting                 7m41s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m41s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m41s                  kubelet          Node ha-256890 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m41s                  kubelet          Node ha-256890 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           7m37s                  node-controller  Node ha-256890 event: Registered Node ha-256890 in Controller
	  Normal   NodeReady                7m22s                  kubelet          Node ha-256890 status is now: NodeReady
	  Normal   RegisteredNode           7m7s                   node-controller  Node ha-256890 event: Registered Node ha-256890 in Controller
	  Normal   RegisteredNode           5m57s                  node-controller  Node ha-256890 event: Registered Node ha-256890 in Controller
	  Normal   RegisteredNode           3m26s                  node-controller  Node ha-256890 event: Registered Node ha-256890 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m48s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  2m47s (x8 over 2m48s)  kubelet          Node ha-256890 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x8 over 2m48s)  kubelet          Node ha-256890 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x7 over 2m48s)  kubelet          Node ha-256890 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m9s                   node-controller  Node ha-256890 event: Registered Node ha-256890 in Controller
	  Normal   RegisteredNode           83s                    node-controller  Node ha-256890 event: Registered Node ha-256890 in Controller
	  Normal   NodeNotReady             58s                    node-controller  Node ha-256890 status is now: NodeNotReady
	  Normal   RegisteredNode           53s                    node-controller  Node ha-256890 event: Registered Node ha-256890 in Controller
	
	
	Name:               ha-256890-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-256890-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-256890
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T18_01_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:01:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-256890-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:08:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:06:45 +0000   Tue, 05 Nov 2024 18:01:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:06:45 +0000   Tue, 05 Nov 2024 18:01:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:06:45 +0000   Tue, 05 Nov 2024 18:01:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:06:45 +0000   Tue, 05 Nov 2024 18:02:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-256890-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 b11d68e37faa4ce68d05d87a02d3097e
	  System UUID:                0270f480-594b-400c-9978-6e080374c59a
	  Boot ID:                    308934a7-38b0-4c4f-b876-76c17d9b7ecd
	  Kernel Version:             5.15.0-1071-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-z4gpr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 etcd-ha-256890-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m15s
	  kube-system                 kindnet-xmj9b                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m17s
	  kube-system                 kube-apiserver-ha-256890-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-controller-manager-ha-256890-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-proxy-fkfkc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 kube-scheduler-ha-256890-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-vip-ha-256890-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 114s                   kube-proxy       
	  Normal   Starting                 7m11s                  kube-proxy       
	  Normal   Starting                 3m31s                  kube-proxy       
	  Warning  CgroupV1                 7m18s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m17s (x8 over 7m17s)  kubelet          Node ha-256890-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m17s (x8 over 7m17s)  kubelet          Node ha-256890-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m17s (x7 over 7m17s)  kubelet          Node ha-256890-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m12s                  node-controller  Node ha-256890-m02 event: Registered Node ha-256890-m02 in Controller
	  Normal   RegisteredNode           7m7s                   node-controller  Node ha-256890-m02 event: Registered Node ha-256890-m02 in Controller
	  Normal   RegisteredNode           5m57s                  node-controller  Node ha-256890-m02 event: Registered Node ha-256890-m02 in Controller
	  Normal   Starting                 3m58s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m58s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     3m57s (x7 over 3m58s)  kubelet          Node ha-256890-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  3m57s (x8 over 3m58s)  kubelet          Node ha-256890-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m57s (x8 over 3m58s)  kubelet          Node ha-256890-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           3m26s                  node-controller  Node ha-256890-m02 event: Registered Node ha-256890-m02 in Controller
	  Normal   Starting                 2m46s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m46s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  2m46s (x8 over 2m46s)  kubelet          Node ha-256890-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m46s (x8 over 2m46s)  kubelet          Node ha-256890-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m46s (x7 over 2m46s)  kubelet          Node ha-256890-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m9s                   node-controller  Node ha-256890-m02 event: Registered Node ha-256890-m02 in Controller
	  Normal   RegisteredNode           83s                    node-controller  Node ha-256890-m02 event: Registered Node ha-256890-m02 in Controller
	  Normal   RegisteredNode           53s                    node-controller  Node ha-256890-m02 event: Registered Node ha-256890-m02 in Controller
	
	
	Name:               ha-256890-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-256890-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=ha-256890
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_11_05T18_04_00_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 18:03:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-256890-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 18:08:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 18:08:47 +0000   Tue, 05 Nov 2024 18:08:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 18:08:47 +0000   Tue, 05 Nov 2024 18:08:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 18:08:47 +0000   Tue, 05 Nov 2024 18:08:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 18:08:47 +0000   Tue, 05 Nov 2024 18:08:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-256890-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 45fa2a2d3d11403e896d0f2d39817517
	  System UUID:                f7f685b8-b467-4a4c-8261-a892a9fa313c
	  Boot ID:                    308934a7-38b0-4c4f-b876-76c17d9b7ecd
	  Kernel Version:             5.15.0-1071-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-n5g8d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 kindnet-2wtgp              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m52s
	  kube-system                 kube-proxy-bvn86           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16s                    kube-proxy       
	  Normal   Starting                 4m50s                  kube-proxy       
	  Warning  CgroupV1                 4m53s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 4m53s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  4m53s (x2 over 4m53s)  kubelet          Node ha-256890-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m53s (x2 over 4m53s)  kubelet          Node ha-256890-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m53s (x2 over 4m53s)  kubelet          Node ha-256890-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m52s                  node-controller  Node ha-256890-m04 event: Registered Node ha-256890-m04 in Controller
	  Normal   RegisteredNode           4m52s                  node-controller  Node ha-256890-m04 event: Registered Node ha-256890-m04 in Controller
	  Normal   RegisteredNode           4m52s                  node-controller  Node ha-256890-m04 event: Registered Node ha-256890-m04 in Controller
	  Normal   NodeReady                4m38s                  kubelet          Node ha-256890-m04 status is now: NodeReady
	  Normal   RegisteredNode           3m26s                  node-controller  Node ha-256890-m04 event: Registered Node ha-256890-m04 in Controller
	  Normal   RegisteredNode           2m9s                   node-controller  Node ha-256890-m04 event: Registered Node ha-256890-m04 in Controller
	  Normal   NodeNotReady             89s                    node-controller  Node ha-256890-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           83s                    node-controller  Node ha-256890-m04 event: Registered Node ha-256890-m04 in Controller
	  Normal   RegisteredNode           53s                    node-controller  Node ha-256890-m04 event: Registered Node ha-256890-m04 in Controller
	  Normal   Starting                 38s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 38s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     32s (x7 over 38s)      kubelet          Node ha-256890-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  26s (x8 over 38s)      kubelet          Node ha-256890-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    26s (x8 over 38s)      kubelet          Node ha-256890-m04 status is now: NodeHasNoDiskPressure
	
	
	==> dmesg <==
	[Nov 5 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014171] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.476378] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025481] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.031094] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.017133] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.607383] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.934599] kauditd_printk_skb: 34 callbacks suppressed
	[Nov 5 16:44] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 5 17:18] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [f5a17d6751ba749cd71c1643bd69d6f5e27eefbbd4d1d5e9092d97599beed791] <==
	{"level":"info","ts":"2024-11-05T18:07:52.544831Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5c1de230b692f66d"}
	{"level":"info","ts":"2024-11-05T18:07:52.545172Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5c1de230b692f66d"}
	{"level":"info","ts":"2024-11-05T18:07:52.740596Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"5c1de230b692f66d","stream-type":"stream Message"}
	{"level":"info","ts":"2024-11-05T18:07:52.740718Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5c1de230b692f66d"}
	{"level":"warn","ts":"2024-11-05T18:07:52.861692Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:46148","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-11-05T18:07:52.894658Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"5c1de230b692f66d","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-11-05T18:07:52.894757Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5c1de230b692f66d"}
	{"level":"info","ts":"2024-11-05T18:08:42.113339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892 17085482190003241847)"}
	{"level":"info","ts":"2024-11-05T18:08:42.115297Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"5c1de230b692f66d","removed-remote-peer-urls":["https://192.168.49.4:2380"]}
	{"level":"info","ts":"2024-11-05T18:08:42.115360Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"5c1de230b692f66d"}
	{"level":"warn","ts":"2024-11-05T18:08:42.115614Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5c1de230b692f66d"}
	{"level":"info","ts":"2024-11-05T18:08:42.115644Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5c1de230b692f66d"}
	{"level":"warn","ts":"2024-11-05T18:08:42.115737Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5c1de230b692f66d"}
	{"level":"info","ts":"2024-11-05T18:08:42.115759Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5c1de230b692f66d"}
	{"level":"info","ts":"2024-11-05T18:08:42.115897Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"5c1de230b692f66d"}
	{"level":"warn","ts":"2024-11-05T18:08:42.116034Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5c1de230b692f66d","error":"context canceled"}
	{"level":"warn","ts":"2024-11-05T18:08:42.116074Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"5c1de230b692f66d","error":"failed to read 5c1de230b692f66d on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-11-05T18:08:42.116090Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"5c1de230b692f66d"}
	{"level":"warn","ts":"2024-11-05T18:08:42.116197Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5c1de230b692f66d","error":"context canceled"}
	{"level":"info","ts":"2024-11-05T18:08:42.116214Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"5c1de230b692f66d"}
	{"level":"info","ts":"2024-11-05T18:08:42.116226Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"5c1de230b692f66d"}
	{"level":"info","ts":"2024-11-05T18:08:42.116256Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"5c1de230b692f66d"}
	{"level":"info","ts":"2024-11-05T18:08:42.116281Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"5c1de230b692f66d"}
	{"level":"warn","ts":"2024-11-05T18:08:42.164775Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"5c1de230b692f66d"}
	{"level":"warn","ts":"2024-11-05T18:08:42.165141Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"5c1de230b692f66d"}
	
	
	==> kernel <==
	 18:08:52 up  1:51,  0 users,  load average: 2.75, 2.54, 2.28
	Linux ha-256890 5.15.0-1071-aws #77~20.04.1-Ubuntu SMP Thu Oct 3 19:34:36 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [64b4f3d587b276da03ee6ecfea9228e5b25b7f9785adfaf8f8ef852697a41bce] <==
	I1105 18:08:22.068982       1 main.go:324] Node ha-256890-m04 has CIDR [10.244.3.0/24] 
	I1105 18:08:22.069099       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 18:08:22.069112       1 main.go:301] handling current node
	I1105 18:08:32.064677       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 18:08:32.064721       1 main.go:301] handling current node
	I1105 18:08:32.064737       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1105 18:08:32.064747       1 main.go:324] Node ha-256890-m02 has CIDR [10.244.1.0/24] 
	I1105 18:08:32.064962       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1105 18:08:32.064977       1 main.go:324] Node ha-256890-m03 has CIDR [10.244.2.0/24] 
	I1105 18:08:32.065060       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1105 18:08:32.065072       1 main.go:324] Node ha-256890-m04 has CIDR [10.244.3.0/24] 
	I1105 18:08:42.062620       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1105 18:08:42.062827       1 main.go:324] Node ha-256890-m04 has CIDR [10.244.3.0/24] 
	I1105 18:08:42.063088       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 18:08:42.063177       1 main.go:301] handling current node
	I1105 18:08:42.063218       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1105 18:08:42.063275       1 main.go:324] Node ha-256890-m02 has CIDR [10.244.1.0/24] 
	I1105 18:08:42.063430       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1105 18:08:42.063470       1 main.go:324] Node ha-256890-m03 has CIDR [10.244.2.0/24] 
	I1105 18:08:52.062549       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 18:08:52.062592       1 main.go:301] handling current node
	I1105 18:08:52.062609       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1105 18:08:52.062616       1 main.go:324] Node ha-256890-m02 has CIDR [10.244.1.0/24] 
	I1105 18:08:52.062771       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1105 18:08:52.062787       1 main.go:324] Node ha-256890-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [44577be34ba5498db3c5932f91c6cfdd33abe13d925976afabd0546297937af5] <==
	I1105 18:07:20.659577       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1105 18:07:20.659689       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1105 18:07:20.661303       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1105 18:07:20.662126       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I1105 18:07:20.978941       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1105 18:07:20.979014       1 policy_source.go:224] refreshing policies
	I1105 18:07:20.999195       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1105 18:07:21.027620       1 shared_informer.go:320] Caches are synced for configmaps
	I1105 18:07:21.027705       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1105 18:07:21.028025       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1105 18:07:21.028044       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1105 18:07:21.028533       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1105 18:07:21.029097       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1105 18:07:21.029176       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1105 18:07:21.036254       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1105 18:07:21.062714       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1105 18:07:21.062755       1 aggregator.go:171] initial CRD sync complete...
	I1105 18:07:21.062763       1 autoregister_controller.go:144] Starting autoregister controller
	I1105 18:07:21.062770       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1105 18:07:21.062775       1 cache.go:39] Caches are synced for autoregister controller
	I1105 18:07:21.066367       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1105 18:07:21.639039       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1105 18:07:22.295629       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1105 18:07:22.298610       1 controller.go:615] quota admission added evaluator for: endpoints
	I1105 18:07:22.361139       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [df1e1adfe09d77a7f12a60728a611e57bc95d16da4f0121231a38be47775a34c] <==
	W1105 18:06:37.350411       1 reflector.go:561] storage/cacher.go:/validatingadmissionpolicybindings: failed to list *admissionregistration.ValidatingAdmissionPolicyBinding: etcdserver: leader changed
	E1105 18:06:37.350580       1 cacher.go:478] cacher (validatingadmissionpolicybindings.admissionregistration.k8s.io): unexpected ListAndWatch error: failed to list *admissionregistration.ValidatingAdmissionPolicyBinding: etcdserver: leader changed; reinitializing...
	I1105 18:06:37.726046       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1105 18:06:39.227160       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1105 18:06:39.227213       1 shared_informer.go:320] Caches are synced for configmaps
	W1105 18:06:39.375591       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1105 18:06:39.766040       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1105 18:06:39.895479       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1105 18:06:39.895521       1 policy_source.go:224] refreshing policies
	I1105 18:06:39.948650       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1105 18:06:39.996253       1 controller.go:615] quota admission added evaluator for: endpoints
	I1105 18:06:40.018453       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1105 18:06:40.025481       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1105 18:06:40.124599       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1105 18:06:40.124640       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1105 18:06:40.223835       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1105 18:06:40.223967       1 aggregator.go:171] initial CRD sync complete...
	I1105 18:06:40.223982       1 autoregister_controller.go:144] Starting autoregister controller
	I1105 18:06:40.223989       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1105 18:06:40.324645       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1105 18:06:40.329568       1 cache.go:39] Caches are synced for autoregister controller
	I1105 18:06:40.329804       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1105 18:06:40.330249       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1105 18:06:40.334585       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	F1105 18:07:17.724056       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-controller-manager [31b1b9af4fa9068fe838f2616a8d349d9496c16557e8905e7258862927579fe2] <==
	I1105 18:08:26.940283       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-256890-m04"
	I1105 18:08:26.957080       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-256890-m04"
	I1105 18:08:29.134617       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-256890-m04"
	I1105 18:08:38.318548       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-256890-m03"
	I1105 18:08:38.341398       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-256890-m03"
	I1105 18:08:38.591663       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="206.171482ms"
	I1105 18:08:38.607059       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.346749ms"
	I1105 18:08:38.607164       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="67.332µs"
	I1105 18:08:38.620577       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.68µs"
	I1105 18:08:38.620660       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.077µs"
	I1105 18:08:38.683712       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.434632ms"
	I1105 18:08:38.683996       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="67.356µs"
	I1105 18:08:40.471833       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.842µs"
	I1105 18:08:41.321115       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.484µs"
	I1105 18:08:41.334932       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.720687ms"
	I1105 18:08:42.287408       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.760895ms"
	I1105 18:08:42.287585       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="63.008µs"
	I1105 18:08:45.432409       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-256890-m04"
	I1105 18:08:45.432781       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-256890-m03"
	I1105 18:08:47.262515       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-256890-m04"
	E1105 18:08:49.907025       1 gc_controller.go:151] "Failed to get node" err="node \"ha-256890-m03\" not found" logger="pod-garbage-collector-controller" node="ha-256890-m03"
	E1105 18:08:49.907058       1 gc_controller.go:151] "Failed to get node" err="node \"ha-256890-m03\" not found" logger="pod-garbage-collector-controller" node="ha-256890-m03"
	E1105 18:08:49.907066       1 gc_controller.go:151] "Failed to get node" err="node \"ha-256890-m03\" not found" logger="pod-garbage-collector-controller" node="ha-256890-m03"
	E1105 18:08:49.907072       1 gc_controller.go:151] "Failed to get node" err="node \"ha-256890-m03\" not found" logger="pod-garbage-collector-controller" node="ha-256890-m03"
	E1105 18:08:49.907077       1 gc_controller.go:151] "Failed to get node" err="node \"ha-256890-m03\" not found" logger="pod-garbage-collector-controller" node="ha-256890-m03"
	
	
	==> kube-controller-manager [516a93c7ae35a98e6dea501fc92c738c5aca8fff05e4bba5254c54ed25d673cf] <==
	I1105 18:06:51.995665       1 serving.go:386] Generated self-signed cert in-memory
	I1105 18:06:53.635172       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1105 18:06:53.635203       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:06:53.636715       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1105 18:06:53.636858       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1105 18:06:53.637088       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1105 18:06:53.637175       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1105 18:07:03.653853       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [c96929fd22cd57afc0ee00b107f330a7f859574788d09279260fed5da34306a1] <==
	I1105 18:06:51.969591       1 server_linux.go:66] "Using iptables proxy"
	I1105 18:06:52.268981       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1105 18:06:52.269127       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 18:06:52.315941       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1105 18:06:52.316073       1 server_linux.go:169] "Using iptables Proxier"
	I1105 18:06:52.317861       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 18:06:52.318212       1 server.go:483] "Version info" version="v1.31.2"
	I1105 18:06:52.318370       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 18:06:52.319452       1 config.go:199] "Starting service config controller"
	I1105 18:06:52.319523       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 18:06:52.319574       1 config.go:105] "Starting endpoint slice config controller"
	I1105 18:06:52.319601       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 18:06:52.331883       1 config.go:328] "Starting node config controller"
	I1105 18:06:52.332865       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 18:06:52.420363       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 18:06:52.420473       1 shared_informer.go:320] Caches are synced for service config
	I1105 18:06:52.433224       1 shared_informer.go:320] Caches are synced for node config
	W1105 18:08:06.005800       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-256890&resourceVersion=1629": http2: client connection lost
	W1105 18:08:06.007175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1849": http2: client connection lost
	E1105 18:08:06.008143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1849\": http2: client connection lost" logger="UnhandledError"
	W1105 18:08:06.007213       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1777": http2: client connection lost
	E1105 18:08:06.008274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1777\": http2: client connection lost" logger="UnhandledError"
	E1105 18:08:06.008326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-256890&resourceVersion=1629\": http2: client connection lost" logger="UnhandledError"
	
	
	==> kube-scheduler [092b391ffd45738edc18e4243b7017030f14e9718bb19a4cbacaad8e9d02dcbc] <==
	E1105 18:06:37.118619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:06:37.674378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1105 18:06:37.674428       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 18:06:38.026360       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1105 18:06:38.026406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1105 18:06:49.545168       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1105 18:07:20.924315       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:41502->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1105 18:07:20.924484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:41456->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1105 18:07:20.924652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:41370->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1105 18:07:20.924803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:41446->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1105 18:07:20.925002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:41476->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1105 18:07:20.925101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:41408->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1105 18:07:20.925194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:41442->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1105 18:07:20.925288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:41426->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1105 18:07:20.925380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:41420->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1105 18:07:20.925485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:41378->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1105 18:07:20.925588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:41394->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1105 18:07:20.925685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:41448->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1105 18:07:20.925778       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:41464->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1105 18:07:20.925874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:41486->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1105 18:07:20.941518       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:48874->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1105 18:08:38.473458       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-n5g8d\": pod busybox-7dff88458-n5g8d is already assigned to node \"ha-256890-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-n5g8d" node="ha-256890-m04"
	E1105 18:08:38.473608       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2f05a279-fca9-4dcb-b2d4-b6c4cb8675fe(default/busybox-7dff88458-n5g8d) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-n5g8d"
	E1105 18:08:38.473667       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-n5g8d\": pod busybox-7dff88458-n5g8d is already assigned to node \"ha-256890-m04\"" pod="default/busybox-7dff88458-n5g8d"
	I1105 18:08:38.473712       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-n5g8d" node="ha-256890-m04"
	
	
	==> kubelet <==
	Nov 05 18:08:06 ha-256890 kubelet[760]: E1105 18:08:06.015204     760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-256890&resourceVersion=1629\": http2: client connection lost" logger="UnhandledError"
	Nov 05 18:08:06 ha-256890 kubelet[760]: W1105 18:08:06.015210     760 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1742": http2: client connection lost
	Nov 05 18:08:06 ha-256890 kubelet[760]: E1105 18:08:06.015243     760 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1742\": http2: client connection lost" logger="UnhandledError"
	Nov 05 18:08:06 ha-256890 kubelet[760]: W1105 18:08:06.015251     760 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-256890&resourceVersion=1899": http2: client connection lost
	Nov 05 18:08:06 ha-256890 kubelet[760]: E1105 18:08:06.015283     760 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-256890&resourceVersion=1899\": http2: client connection lost" logger="UnhandledError"
	Nov 05 18:08:06 ha-256890 kubelet[760]: W1105 18:08:06.015287     760 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1742": http2: client connection lost
	Nov 05 18:08:06 ha-256890 kubelet[760]: E1105 18:08:06.015306     760 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1742\": http2: client connection lost" logger="UnhandledError"
	Nov 05 18:08:06 ha-256890 kubelet[760]: W1105 18:08:06.015350     760 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1742": http2: client connection lost
	Nov 05 18:08:06 ha-256890 kubelet[760]: E1105 18:08:06.015374     760 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1742\": http2: client connection lost" logger="UnhandledError"
	Nov 05 18:08:06 ha-256890 kubelet[760]: E1105 18:08:06.015324     760 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-256890.180524b04e08d8db\": http2: client connection lost" event="&Event{ObjectMeta:{kube-apiserver-ha-256890.180524b04e08d8db  kube-system   1656 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-256890,UID:965d36b0bd01f8db0db496fb9c277696,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.31.2\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-256890,},FirstTimestamp:2024-11-05 18:06:11 +0000 UTC,LastTimestamp:2024-11-05 18:07:18.102369386 +0000 UTC m=+73.403511621,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-256890,}"
	Nov 05 18:08:06 ha-256890 kubelet[760]: I1105 18:08:06.015424     760 status_manager.go:851] "Failed to get status for pod" podUID="f03aafba1fd5ea3d6573aa24d746d35f" pod="kube-system/kube-vip-ha-256890" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-256890\": http2: client connection lost"
	Nov 05 18:08:06 ha-256890 kubelet[760]: W1105 18:08:06.015454     760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1776": http2: client connection lost
	Nov 05 18:08:06 ha-256890 kubelet[760]: E1105 18:08:06.015481     760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1776\": http2: client connection lost" logger="UnhandledError"
	Nov 05 18:08:06 ha-256890 kubelet[760]: W1105 18:08:06.015521     760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1858": http2: client connection lost
	Nov 05 18:08:06 ha-256890 kubelet[760]: E1105 18:08:06.015547     760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1858\": http2: client connection lost" logger="UnhandledError"
	Nov 05 18:08:06 ha-256890 kubelet[760]: E1105 18:08:06.015588     760 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-256890?timeout=10s\": http2: client connection lost"
	Nov 05 18:08:06 ha-256890 kubelet[760]: I1105 18:08:06.015607     760 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Nov 05 18:08:14 ha-256890 kubelet[760]: E1105 18:08:14.917923     760 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830094917495724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157324,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:08:14 ha-256890 kubelet[760]: E1105 18:08:14.917958     760 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830094917495724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157324,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:08:24 ha-256890 kubelet[760]: E1105 18:08:24.920853     760 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830104919722627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157324,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:08:24 ha-256890 kubelet[760]: E1105 18:08:24.921475     760 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830104919722627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157324,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:08:34 ha-256890 kubelet[760]: E1105 18:08:34.922839     760 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830114922536023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157324,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:08:34 ha-256890 kubelet[760]: E1105 18:08:34.922879     760 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830114922536023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157324,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:08:44 ha-256890 kubelet[760]: E1105 18:08:44.923907     760 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830124923716997,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157324,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 18:08:44 ha-256890 kubelet[760]: E1105 18:08:44.923944     760 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730830124923716997,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157324,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-256890 -n ha-256890
helpers_test.go:261: (dbg) Run:  kubectl --context ha-256890 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (16.67s)

                                                
                                    

Test pass (296/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.08
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 8.38
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.21
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.53
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 215.79
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 10.89
35 TestAddons/parallel/Registry 18.4
37 TestAddons/parallel/InspektorGadget 11.7
40 TestAddons/parallel/CSI 42.21
41 TestAddons/parallel/Headlamp 17.83
42 TestAddons/parallel/CloudSpanner 6.56
43 TestAddons/parallel/LocalPath 8.35
44 TestAddons/parallel/NvidiaDevicePlugin 6.49
45 TestAddons/parallel/Yakd 11.7
47 TestAddons/StoppedEnableDisable 12.18
48 TestCertOptions 36.7
49 TestCertExpiration 248.75
51 TestForceSystemdFlag 39.57
52 TestForceSystemdEnv 44.6
58 TestErrorSpam/setup 28.22
59 TestErrorSpam/start 0.74
60 TestErrorSpam/status 1.06
61 TestErrorSpam/pause 1.78
62 TestErrorSpam/unpause 1.74
63 TestErrorSpam/stop 1.49
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 49.44
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 56.42
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.44
75 TestFunctional/serial/CacheCmd/cache/add_local 1.39
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.08
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 33.86
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.72
86 TestFunctional/serial/LogsFileCmd 1.73
87 TestFunctional/serial/InvalidService 4.63
89 TestFunctional/parallel/ConfigCmd 0.54
90 TestFunctional/parallel/DashboardCmd 8.38
91 TestFunctional/parallel/DryRun 0.58
92 TestFunctional/parallel/InternationalLanguage 0.29
93 TestFunctional/parallel/StatusCmd 0.96
97 TestFunctional/parallel/ServiceCmdConnect 6.63
98 TestFunctional/parallel/AddonsCmd 0.23
99 TestFunctional/parallel/PersistentVolumeClaim 26.84
101 TestFunctional/parallel/SSHCmd 0.5
102 TestFunctional/parallel/CpCmd 1.91
104 TestFunctional/parallel/FileSync 0.32
105 TestFunctional/parallel/CertSync 2.12
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
113 TestFunctional/parallel/License 0.28
114 TestFunctional/parallel/Version/short 0.08
115 TestFunctional/parallel/Version/components 1.3
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.41
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
120 TestFunctional/parallel/ImageCommands/ImageBuild 5.49
121 TestFunctional/parallel/ImageCommands/Setup 0.65
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.26
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.56
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.12
127 TestFunctional/parallel/ServiceCmd/DeployApp 10.29
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.28
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.42
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.78
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.36
138 TestFunctional/parallel/ServiceCmd/List 0.42
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.4
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
141 TestFunctional/parallel/ServiceCmd/Format 0.4
142 TestFunctional/parallel/ServiceCmd/URL 0.37
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
150 TestFunctional/parallel/ProfileCmd/profile_list 0.39
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
152 TestFunctional/parallel/MountCmd/any-port 8.88
153 TestFunctional/parallel/MountCmd/specific-port 1.98
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.88
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 172.83
162 TestMultiControlPlane/serial/DeployApp 8.77
163 TestMultiControlPlane/serial/PingHostFromPods 1.53
164 TestMultiControlPlane/serial/AddWorkerNode 33.41
165 TestMultiControlPlane/serial/NodeLabels 0.1
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.94
167 TestMultiControlPlane/serial/CopyFile 17.8
168 TestMultiControlPlane/serial/StopSecondaryNode 12.67
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
170 TestMultiControlPlane/serial/RestartSecondaryNode 25.96
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.29
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 196.85
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.75
175 TestMultiControlPlane/serial/StopCluster 35.77
176 TestMultiControlPlane/serial/RestartCluster 62.37
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
178 TestMultiControlPlane/serial/AddSecondaryNode 74.92
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
183 TestJSONOutput/start/Command 46.86
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.74
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.65
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.9
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.23
208 TestKicCustomNetwork/create_custom_network 38.13
209 TestKicCustomNetwork/use_default_bridge_network 30.32
210 TestKicExistingNetwork 34.64
211 TestKicCustomSubnet 32.64
212 TestKicStaticIP 30.91
213 TestMainNoArgs 0.06
214 TestMinikubeProfile 66.21
217 TestMountStart/serial/StartWithMountFirst 6.33
218 TestMountStart/serial/VerifyMountFirst 0.26
219 TestMountStart/serial/StartWithMountSecond 10.38
220 TestMountStart/serial/VerifyMountSecond 0.24
221 TestMountStart/serial/DeleteFirst 1.61
222 TestMountStart/serial/VerifyMountPostDelete 0.25
223 TestMountStart/serial/Stop 1.2
224 TestMountStart/serial/RestartStopped 8.16
225 TestMountStart/serial/VerifyMountPostStop 0.26
228 TestMultiNode/serial/FreshStart2Nodes 81.42
229 TestMultiNode/serial/DeployApp2Nodes 7.41
230 TestMultiNode/serial/PingHostFrom2Pods 0.99
231 TestMultiNode/serial/AddNode 29.52
232 TestMultiNode/serial/MultiNodeLabels 0.09
233 TestMultiNode/serial/ProfileList 0.64
234 TestMultiNode/serial/CopyFile 9.47
235 TestMultiNode/serial/StopNode 2.17
236 TestMultiNode/serial/StartAfterStop 9.81
237 TestMultiNode/serial/RestartKeepsNodes 94
238 TestMultiNode/serial/DeleteNode 5.35
239 TestMultiNode/serial/StopMultiNode 23.88
240 TestMultiNode/serial/RestartMultiNode 55.82
241 TestMultiNode/serial/ValidateNameConflict 31.19
246 TestPreload 129.65
248 TestScheduledStopUnix 105.6
251 TestInsufficientStorage 10.44
252 TestRunningBinaryUpgrade 81.8
254 TestKubernetesUpgrade 398.28
255 TestMissingContainerUpgrade 166.63
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
258 TestNoKubernetes/serial/StartWithK8s 36.52
259 TestNoKubernetes/serial/StartWithStopK8s 9.67
260 TestNoKubernetes/serial/Start 8.02
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.43
262 TestNoKubernetes/serial/ProfileList 1.55
263 TestNoKubernetes/serial/Stop 1.25
264 TestNoKubernetes/serial/StartNoArgs 8.2
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
266 TestStoppedBinaryUpgrade/Setup 1.13
267 TestStoppedBinaryUpgrade/Upgrade 83.56
268 TestStoppedBinaryUpgrade/MinikubeLogs 0.98
277 TestPause/serial/Start 53.48
278 TestPause/serial/SecondStartNoReconfiguration 20.58
279 TestPause/serial/Pause 1.16
280 TestPause/serial/VerifyStatus 0.49
281 TestPause/serial/Unpause 0.96
282 TestPause/serial/PauseAgain 1.36
283 TestPause/serial/DeletePaused 3.36
284 TestPause/serial/VerifyDeletedResources 1.02
292 TestNetworkPlugins/group/false 4.66
297 TestStartStop/group/old-k8s-version/serial/FirstStart 153.55
298 TestStartStop/group/old-k8s-version/serial/DeployApp 10.93
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.13
300 TestStartStop/group/old-k8s-version/serial/Stop 12.35
302 TestStartStop/group/no-preload/serial/FirstStart 68.97
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
304 TestStartStop/group/old-k8s-version/serial/SecondStart 35.11
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 26
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
307 TestStartStop/group/no-preload/serial/DeployApp 10.39
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
309 TestStartStop/group/old-k8s-version/serial/Pause 3
311 TestStartStop/group/embed-certs/serial/FirstStart 53.29
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.33
313 TestStartStop/group/no-preload/serial/Stop 12.13
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
315 TestStartStop/group/no-preload/serial/SecondStart 279.66
316 TestStartStop/group/embed-certs/serial/DeployApp 10.32
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
318 TestStartStop/group/embed-certs/serial/Stop 12.01
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
320 TestStartStop/group/embed-certs/serial/SecondStart 302.51
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
323 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
324 TestStartStop/group/no-preload/serial/Pause 3.07
326 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.18
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.97
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 298.39
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
335 TestStartStop/group/embed-certs/serial/Pause 3.83
337 TestStartStop/group/newest-cni/serial/FirstStart 40.41
338 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.63
340 TestStartStop/group/newest-cni/serial/Stop 2
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
342 TestStartStop/group/newest-cni/serial/SecondStart 16.37
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
346 TestStartStop/group/newest-cni/serial/Pause 2.93
347 TestNetworkPlugins/group/auto/Start 53.56
348 TestNetworkPlugins/group/auto/KubeletFlags 0.32
349 TestNetworkPlugins/group/auto/NetCatPod 10.29
350 TestNetworkPlugins/group/auto/DNS 0.17
351 TestNetworkPlugins/group/auto/Localhost 0.15
352 TestNetworkPlugins/group/auto/HairPin 0.16
353 TestNetworkPlugins/group/flannel/Start 54.75
354 TestNetworkPlugins/group/flannel/ControllerPod 6.01
355 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
356 TestNetworkPlugins/group/flannel/NetCatPod 11.25
357 TestNetworkPlugins/group/flannel/DNS 0.17
358 TestNetworkPlugins/group/flannel/Localhost 0.16
359 TestNetworkPlugins/group/flannel/HairPin 0.14
360 TestNetworkPlugins/group/calico/Start 62.25
361 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
362 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
363 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
364 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.71
365 TestNetworkPlugins/group/custom-flannel/Start 59.58
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.36
368 TestNetworkPlugins/group/calico/NetCatPod 11.32
369 TestNetworkPlugins/group/calico/DNS 0.24
370 TestNetworkPlugins/group/calico/Localhost 0.21
371 TestNetworkPlugins/group/calico/HairPin 0.21
372 TestNetworkPlugins/group/kindnet/Start 50.4
373 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
374 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.33
375 TestNetworkPlugins/group/custom-flannel/DNS 0.24
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
378 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
379 TestNetworkPlugins/group/bridge/Start 48.83
380 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
381 TestNetworkPlugins/group/kindnet/NetCatPod 12.52
382 TestNetworkPlugins/group/kindnet/DNS 0.24
383 TestNetworkPlugins/group/kindnet/Localhost 0.2
384 TestNetworkPlugins/group/kindnet/HairPin 0.18
385 TestNetworkPlugins/group/enable-default-cni/Start 74.69
386 TestNetworkPlugins/group/bridge/KubeletFlags 0.43
387 TestNetworkPlugins/group/bridge/NetCatPod 11.42
388 TestNetworkPlugins/group/bridge/DNS 0.22
389 TestNetworkPlugins/group/bridge/Localhost 0.16
390 TestNetworkPlugins/group/bridge/HairPin 0.18
391 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
392 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
393 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
394 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
395 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (8.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-931410 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-931410 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.078276844s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.08s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1105 17:46:31.113767  285188 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1105 17:46:31.113845  285188 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-931410
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-931410: exit status 85 (88.632367ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-931410 | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC |          |
	|         | -p download-only-931410        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 17:46:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 17:46:23.084197  285194 out.go:345] Setting OutFile to fd 1 ...
	I1105 17:46:23.084368  285194 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:46:23.084398  285194 out.go:358] Setting ErrFile to fd 2...
	I1105 17:46:23.084418  285194 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:46:23.084708  285194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-279806/.minikube/bin
	W1105 17:46:23.084864  285194 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19910-279806/.minikube/config/config.json: open /home/jenkins/minikube-integration/19910-279806/.minikube/config/config.json: no such file or directory
	I1105 17:46:23.085277  285194 out.go:352] Setting JSON to true
	I1105 17:46:23.086111  285194 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5326,"bootTime":1730823457,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1105 17:46:23.086200  285194 start.go:139] virtualization:  
	I1105 17:46:23.089810  285194 out.go:97] [download-only-931410] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1105 17:46:23.089989  285194 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball: no such file or directory
	I1105 17:46:23.090030  285194 notify.go:220] Checking for updates...
	I1105 17:46:23.092580  285194 out.go:169] MINIKUBE_LOCATION=19910
	I1105 17:46:23.095205  285194 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 17:46:23.097871  285194 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19910-279806/kubeconfig
	I1105 17:46:23.100462  285194 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-279806/.minikube
	I1105 17:46:23.103056  285194 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1105 17:46:23.108044  285194 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1105 17:46:23.108360  285194 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 17:46:23.129205  285194 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1105 17:46:23.129323  285194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 17:46:23.201025  285194 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-11-05 17:46:23.19197789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1105 17:46:23.201129  285194 docker.go:318] overlay module found
	I1105 17:46:23.203698  285194 out.go:97] Using the docker driver based on user configuration
	I1105 17:46:23.203720  285194 start.go:297] selected driver: docker
	I1105 17:46:23.203726  285194 start.go:901] validating driver "docker" against <nil>
	I1105 17:46:23.203840  285194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 17:46:23.255052  285194 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-11-05 17:46:23.246407272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1105 17:46:23.255260  285194 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 17:46:23.255530  285194 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1105 17:46:23.255692  285194 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1105 17:46:23.258439  285194 out.go:169] Using Docker driver with root privileges
	I1105 17:46:23.260992  285194 cni.go:84] Creating CNI manager for ""
	I1105 17:46:23.261059  285194 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1105 17:46:23.261072  285194 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1105 17:46:23.261160  285194 start.go:340] cluster config:
	{Name:download-only-931410 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-931410 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 17:46:23.263876  285194 out.go:97] Starting "download-only-931410" primary control-plane node in "download-only-931410" cluster
	I1105 17:46:23.263905  285194 cache.go:121] Beginning downloading kic base image for docker with crio
	I1105 17:46:23.266652  285194 out.go:97] Pulling base image v0.0.45-1730282848-19883 ...
	I1105 17:46:23.266685  285194 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 17:46:23.266854  285194 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local docker daemon
	I1105 17:46:23.281453  285194 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 to local cache
	I1105 17:46:23.282094  285194 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local cache directory
	I1105 17:46:23.282195  285194 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 to local cache
	I1105 17:46:23.343650  285194 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1105 17:46:23.343676  285194 cache.go:56] Caching tarball of preloaded images
	I1105 17:46:23.344268  285194 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 17:46:23.347236  285194 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1105 17:46:23.347258  285194 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1105 17:46:23.432171  285194 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-931410 host does not exist
	  To start a cluster, run: "minikube start -p download-only-931410"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-931410
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/json-events (8.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-080457 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-080457 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.375226737s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (8.38s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1105 17:46:39.908660  285188 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1105 17:46:39.908696  285188 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-080457
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-080457: exit status 85 (69.632104ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-931410 | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC |                     |
	|         | -p download-only-931410        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	| delete  | -p download-only-931410        | download-only-931410 | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	| start   | -o=json --download-only        | download-only-080457 | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC |                     |
	|         | -p download-only-080457        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 17:46:31
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.2 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 17:46:31.578952  285396 out.go:345] Setting OutFile to fd 1 ...
	I1105 17:46:31.579116  285396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:46:31.579142  285396 out.go:358] Setting ErrFile to fd 2...
	I1105 17:46:31.579158  285396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:46:31.579707  285396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-279806/.minikube/bin
	I1105 17:46:31.580380  285396 out.go:352] Setting JSON to true
	I1105 17:46:31.581281  285396 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5335,"bootTime":1730823457,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1105 17:46:31.581457  285396 start.go:139] virtualization:  
	I1105 17:46:31.584074  285396 out.go:97] [download-only-080457] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1105 17:46:31.584451  285396 notify.go:220] Checking for updates...
	I1105 17:46:31.586274  285396 out.go:169] MINIKUBE_LOCATION=19910
	I1105 17:46:31.587568  285396 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 17:46:31.588729  285396 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19910-279806/kubeconfig
	I1105 17:46:31.590395  285396 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-279806/.minikube
	I1105 17:46:31.591866  285396 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1105 17:46:31.595215  285396 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1105 17:46:31.595452  285396 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 17:46:31.617094  285396 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1105 17:46:31.617205  285396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 17:46:31.676564  285396 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-11-05 17:46:31.666555141 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1105 17:46:31.676762  285396 docker.go:318] overlay module found
	I1105 17:46:31.678664  285396 out.go:97] Using the docker driver based on user configuration
	I1105 17:46:31.678692  285396 start.go:297] selected driver: docker
	I1105 17:46:31.678716  285396 start.go:901] validating driver "docker" against <nil>
	I1105 17:46:31.678865  285396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 17:46:31.729960  285396 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-11-05 17:46:31.72130504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1105 17:46:31.730165  285396 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 17:46:31.730488  285396 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1105 17:46:31.730642  285396 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1105 17:46:31.733416  285396 out.go:169] Using Docker driver with root privileges
	I1105 17:46:31.735431  285396 cni.go:84] Creating CNI manager for ""
	I1105 17:46:31.735498  285396 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1105 17:46:31.735512  285396 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1105 17:46:31.735591  285396 start.go:340] cluster config:
	{Name:download-only-080457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-080457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 17:46:31.737697  285396 out.go:97] Starting "download-only-080457" primary control-plane node in "download-only-080457" cluster
	I1105 17:46:31.737717  285396 cache.go:121] Beginning downloading kic base image for docker with crio
	I1105 17:46:31.739729  285396 out.go:97] Pulling base image v0.0.45-1730282848-19883 ...
	I1105 17:46:31.739760  285396 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 17:46:31.739863  285396 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local docker daemon
	I1105 17:46:31.754489  285396 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 to local cache
	I1105 17:46:31.754648  285396 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local cache directory
	I1105 17:46:31.754667  285396 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local cache directory, skipping pull
	I1105 17:46:31.754672  285396 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 exists in cache, skipping pull
	I1105 17:46:31.754680  285396 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 as a tarball
	I1105 17:46:31.834364  285396 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	I1105 17:46:31.834406  285396 cache.go:56] Caching tarball of preloaded images
	I1105 17:46:31.835126  285396 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 17:46:31.837678  285396 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1105 17:46:31.837699  285396 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 ...
	I1105 17:46:31.926548  285396 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:810fe254d498dda367f4e14b5cba638f -> /home/jenkins/minikube-integration/19910-279806/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-080457 host does not exist
	  To start a cluster, run: "minikube start -p download-only-080457"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)
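
The download-only runs above fetch their preload tarballs from URLs carrying a ?checksum=md5:<digest> query parameter, which indicates the downloader verifies the file against that digest before caching it under .minikube/cache. The following is a minimal standalone sketch of an equivalent check, not minikube's own code; verifyMD5 is a hypothetical helper and the local file name is only illustrative, while the digest is the one shown in the v1.31.2 download URL above.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 recomputes the MD5 digest of a downloaded tarball and compares it
	// to the expected hex digest taken from the ?checksum=md5:... query parameter.
	func verifyMD5(path, expected string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != expected {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
		}
		return nil
	}

	func main() {
		// Digest copied from the download URL logged above; the path is illustrative.
		err := verifyMD5("preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4",
			"810fe254d498dda367f4e14b5cba638f")
		fmt.Println(err)
	}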

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-080457
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.53s)

                                                
                                                
=== RUN   TestBinaryMirror
I1105 17:46:41.112399  285188 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-032774 --alsologtostderr --binary-mirror http://127.0.0.1:34655 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-032774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-032774
--- PASS: TestBinaryMirror (0.53s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-638421
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-638421: exit status 85 (73.782655ms)

                                                
                                                
-- stdout --
	* Profile "addons-638421" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-638421"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-638421
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-638421: exit status 85 (71.487019ms)

                                                
                                                
-- stdout --
	* Profile "addons-638421" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-638421"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (215.79s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-638421 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-638421 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m35.783634439s)
--- PASS: TestAddons/Setup (215.79s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-638421 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-638421 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.89s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-638421 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-638421 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9f8f79c6-6445-4f78-b069-a69d7f39fb6f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9f8f79c6-6445-4f78-b069-a69d7f39fb6f] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003654686s
addons_test.go:633: (dbg) Run:  kubectl --context addons-638421 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-638421 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-638421 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-638421 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.89s)

                                                
                                    
x
+
TestAddons/parallel/Registry (18.4s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 7.240434ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-xl46f" [fc8d5d2f-faa3-4f66-b3c1-dac5435a86e5] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003573379s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2jjl8" [c9892084-3bb9-41d8-b4e5-856524765e94] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00457044s
addons_test.go:331: (dbg) Run:  kubectl --context addons-638421 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-638421 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-638421 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.304538966s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 ip
2024/11/05 17:50:54 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.40s)
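
Most of the parallel addon tests in this report follow the pattern visible above: list pods in a namespace by label selector and wait until all of them are Running before exercising the addon. A minimal client-go sketch of that pattern follows; waitForRunningPods, the 2-second poll interval, and the use of the KUBECONFIG environment variable are illustrative assumptions, not the helpers_test.go implementation.

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForRunningPods polls pods matching selector in ns until all of them are
	// Running or the timeout elapses.
	func waitForRunningPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			if len(pods.Items) > 0 && running == len(pods.Items) {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("timed out waiting for pods matching %q in namespace %q", selector, ns)
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Example: the registry test above waits on actual-registry=true in kube-system.
		fmt.Println(waitForRunningPods(context.Background(), cs, "kube-system", "actual-registry=true", 6*time.Minute))
	}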

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.7s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hf5z6" [cddcb1b4-5d66-43ec-9db8-636199bdf3d1] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004012411s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-638421 addons disable inspektor-gadget --alsologtostderr -v=1: (5.695100972s)
--- PASS: TestAddons/parallel/InspektorGadget (11.70s)

                                                
                                    
x
+
TestAddons/parallel/CSI (42.21s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1105 17:50:55.160223  285188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1105 17:50:55.170747  285188 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1105 17:50:55.171082  285188 kapi.go:107] duration metric: took 13.237383ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 13.405727ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-638421 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-638421 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3b279376-a4c3-4631-9b21-84ca269d3002] Pending
helpers_test.go:344: "task-pv-pod" [3b279376-a4c3-4631-9b21-84ca269d3002] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3b279376-a4c3-4631-9b21-84ca269d3002] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.005077055s
addons_test.go:511: (dbg) Run:  kubectl --context addons-638421 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-638421 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-638421 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-638421 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-638421 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-638421 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-638421 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c66a1e1f-94c9-4303-bced-261e5b706cd4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c66a1e1f-94c9-4303-bced-261e5b706cd4] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003962342s
addons_test.go:553: (dbg) Run:  kubectl --context addons-638421 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-638421 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-638421 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-638421 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.826947903s)
--- PASS: TestAddons/parallel/CSI (42.21s)
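
The long run of repeated "get pvc hpvc-restore -o jsonpath={.status.phase}" calls above is the test polling the restored claim until it leaves Pending. With client-go the same wait looks roughly like the sketch below, assuming a clientset built as in the earlier registry sketch; waitForPVCBound is an illustrative helper, not the test's code.

	package pvcwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPVCBound polls a PersistentVolumeClaim until its phase is Bound or the
	// timeout elapses, mirroring the repeated jsonpath checks in the log above.
	func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if pvc.Status.Phase == corev1.ClaimBound {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
	}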

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-638421 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-5bkbz" [e68a59f0-2905-4790-bf13-36204802fbac] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-5bkbz" [e68a59f0-2905-4790-bf13-36204802fbac] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-5bkbz" [e68a59f0-2905-4790-bf13-36204802fbac] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003685572s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-638421 addons disable headlamp --alsologtostderr -v=1: (5.883143765s)
--- PASS: TestAddons/parallel/Headlamp (17.83s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.56s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-hdz77" [f3640f53-16bd-4dba-bbdc-f9ea46052384] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.002965011s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.56s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.35s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-638421 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-638421 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-638421 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c0582f93-2239-4de5-992c-224f526dde10] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c0582f93-2239-4de5-992c-224f526dde10] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c0582f93-2239-4de5-992c-224f526dde10] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003423332s
addons_test.go:906: (dbg) Run:  kubectl --context addons-638421 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 ssh "cat /opt/local-path-provisioner/pvc-b3573bff-9dda-4c36-88d8-bc4018837214_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-638421 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-638421 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.35s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.49s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-sms7j" [618e6ceb-8422-465e-9951-05b2b10ce4b0] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004268907s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.49s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-xsl7x" [e30e7c1d-a0c3-4408-a818-f1b7f99e8d76] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00321706s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-638421 addons disable yakd --alsologtostderr -v=1: (5.697472585s)
--- PASS: TestAddons/parallel/Yakd (11.70s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.18s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-638421
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-638421: (11.892079852s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-638421
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-638421
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-638421
--- PASS: TestAddons/StoppedEnableDisable (12.18s)

                                                
                                    
x
+
TestCertOptions (36.7s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-854738 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-854738 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.02753401s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-854738 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-854738 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-854738 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-854738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-854738
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-854738: (2.036301199s)
--- PASS: TestCertOptions (36.70s)
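
TestCertOptions asserts that the extra --apiserver-ips and --apiserver-names values end up in the generated apiserver certificate, which it inspects by running openssl x509 -text inside the node. The same inspection can be done with Go's standard library; this sketch assumes the certificate has been copied locally (the path apiserver.crt stands in for /var/lib/minikube/certs/apiserver.crt on the node). The NotAfter field printed at the end is the property the cert-expiration test further below exercises.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Illustrative local copy of the certificate under test.
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost and www.google.com among others
		fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1 and 192.168.15.15 among others
		fmt.Println("Expires: ", cert.NotAfter)
	}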

                                                
                                    
x
+
TestCertExpiration (248.75s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-795948 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1105 18:35:18.310637  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-795948 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.020120721s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-795948 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-795948 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (26.431515544s)
helpers_test.go:175: Cleaning up "cert-expiration-795948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-795948
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-795948: (2.294043817s)
--- PASS: TestCertExpiration (248.75s)

                                                
                                    
x
+
TestForceSystemdFlag (39.57s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-965299 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-965299 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.508912803s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-965299 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-965299" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-965299
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-965299: (2.653513354s)
--- PASS: TestForceSystemdFlag (39.57s)

                                                
                                    
x
+
TestForceSystemdEnv (44.6s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-585405 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-585405 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.044783982s)
helpers_test.go:175: Cleaning up "force-systemd-env-585405" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-585405
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-585405: (2.552226593s)
--- PASS: TestForceSystemdEnv (44.60s)

                                                
                                    
x
+
TestErrorSpam/setup (28.22s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-474339 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-474339 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-474339 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-474339 --driver=docker  --container-runtime=crio: (28.219506729s)
--- PASS: TestErrorSpam/setup (28.22s)

                                                
                                    
x
+
TestErrorSpam/start (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-474339 --log_dir /tmp/nospam-474339 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-474339 --log_dir /tmp/nospam-474339 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-474339 --log_dir /tmp/nospam-474339 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

                                                
                                    
x
+
TestErrorSpam/status (1.06s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-474339 --log_dir /tmp/nospam-474339 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-474339 --log_dir /tmp/nospam-474339 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-474339 --log_dir /tmp/nospam-474339 status
--- PASS: TestErrorSpam/status (1.06s)

                                                
                                    
x
+
TestErrorSpam/pause (1.78s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-474339 --log_dir /tmp/nospam-474339 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-474339 --log_dir /tmp/nospam-474339 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-474339 --log_dir /tmp/nospam-474339 pause
--- PASS: TestErrorSpam/pause (1.78s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.74s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-474339 --log_dir /tmp/nospam-474339 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-474339 --log_dir /tmp/nospam-474339 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-474339 --log_dir /tmp/nospam-474339 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

                                                
                                    
x
+
TestErrorSpam/stop (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-474339 --log_dir /tmp/nospam-474339 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-474339 --log_dir /tmp/nospam-474339 stop: (1.297682942s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-474339 --log_dir /tmp/nospam-474339 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-474339 --log_dir /tmp/nospam-474339 stop
--- PASS: TestErrorSpam/stop (1.49s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19910-279806/.minikube/files/etc/test/nested/copy/285188/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (49.44s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-762187 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-762187 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (49.437237876s)
--- PASS: TestFunctional/serial/StartWithProxy (49.44s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (56.42s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1105 17:58:01.644046  285188 config.go:182] Loaded profile config "functional-762187": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-762187 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-762187 --alsologtostderr -v=8: (56.412893908s)
functional_test.go:663: soft start took 56.416002367s for "functional-762187" cluster.
I1105 17:58:58.057257  285188 config.go:182] Loaded profile config "functional-762187": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (56.42s)
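
Editor's note: "soft start" here simply means running start a second time against a profile that is already up; minikube detects the existing cluster and reuses it rather than recreating the node. A rough by-hand equivalent, using this run's profile name and binary path, would be:

	out/minikube-linux-arm64 start -p functional-762187 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 start -p functional-762187 --alsologtostderr -v=8   # second start against the live profile = soft start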

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-762187 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-762187 cache add registry.k8s.io/pause:3.1: (1.552947424s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-762187 cache add registry.k8s.io/pause:3.3: (1.441205722s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-762187 cache add registry.k8s.io/pause:latest: (1.44993239s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.44s)
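
Editor's note: cache add pulls an image on the host side and preloads it into the node's container runtime. A rough by-hand equivalent of the steps above (profile name from this run; any image reference works), with an in-node verification borrowed from the later verify_cache_inside_node test:

	out/minikube-linux-arm64 -p functional-762187 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-arm64 -p functional-762187 ssh sudo crictl images          # cached image now listed in-node
	out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1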

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-762187 /tmp/TestFunctionalserialCacheCmdcacheadd_local1550206041/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 cache add minikube-local-cache-test:functional-762187
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 cache delete minikube-local-cache-test:functional-762187
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-762187
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-762187 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (274.292901ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-762187 cache reload: (1.231993668s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.08s)
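
Editor's note: the reload sequence above removes the image from the node's runtime, confirms it is gone, then restores it from the host-side cache. Roughly:

	out/minikube-linux-arm64 -p functional-762187 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-762187 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
	out/minikube-linux-arm64 -p functional-762187 cache reload
	out/minikube-linux-arm64 -p functional-762187 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again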

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 kubectl -- --context functional-762187 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-762187 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.86s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-762187 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-762187 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.856972306s)
functional_test.go:761: restart took 33.857075961s for "functional-762187" cluster.
I1105 17:59:40.775673  285188 config.go:182] Loaded profile config "functional-762187": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (33.86s)
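
Editor's note: roughly speaking, --extra-config passes a component flag through to the cluster configuration at (re)start, in the form component.flag=value. The restart exercised above was simply:

	out/minikube-linux-arm64 start -p functional-762187 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all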

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-762187 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-762187 logs: (1.718305255s)
--- PASS: TestFunctional/serial/LogsCmd (1.72s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 logs --file /tmp/TestFunctionalserialLogsFileCmd3080741135/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-762187 logs --file /tmp/TestFunctionalserialLogsFileCmd3080741135/001/logs.txt: (1.726379148s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.73s)

                                                
                                    
TestFunctional/serial/InvalidService (4.63s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-762187 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-762187
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-762187: exit status 115 (506.235321ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31532 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-762187 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.63s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-762187 config get cpus: exit status 14 (85.692214ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-762187 config get cpus: exit status 14 (88.342593ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)
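
Editor's note: the exit status 14 above is expected, since config get on an unset key fails rather than printing an empty value. A minimal by-hand version of the same round trip:

	out/minikube-linux-arm64 -p functional-762187 config set cpus 2
	out/minikube-linux-arm64 -p functional-762187 config get cpus     # prints 2
	out/minikube-linux-arm64 -p functional-762187 config unset cpus
	out/minikube-linux-arm64 -p functional-762187 config get cpus     # exit status 14: key not found in config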

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-762187 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-762187 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 315504: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.38s)

                                                
                                    
TestFunctional/parallel/DryRun (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-762187 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-762187 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (263.887544ms)

                                                
                                                
-- stdout --
	* [functional-762187] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19910
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19910-279806/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-279806/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 18:00:32.558919  315002 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:00:32.559047  315002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:00:32.559053  315002 out.go:358] Setting ErrFile to fd 2...
	I1105 18:00:32.559058  315002 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:00:32.559296  315002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-279806/.minikube/bin
	I1105 18:00:32.560713  315002 out.go:352] Setting JSON to false
	I1105 18:00:32.561765  315002 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6176,"bootTime":1730823457,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1105 18:00:32.561847  315002 start.go:139] virtualization:  
	I1105 18:00:32.565870  315002 out.go:177] * [functional-762187] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1105 18:00:32.568874  315002 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:00:32.569171  315002 notify.go:220] Checking for updates...
	I1105 18:00:32.575586  315002 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:00:32.578221  315002 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-279806/kubeconfig
	I1105 18:00:32.580925  315002 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-279806/.minikube
	I1105 18:00:32.583560  315002 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1105 18:00:32.586387  315002 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:00:32.589452  315002 config.go:182] Loaded profile config "functional-762187": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:00:32.590170  315002 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:00:32.638702  315002 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1105 18:00:32.638884  315002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 18:00:32.732138  315002 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-11-05 18:00:32.706680967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1105 18:00:32.732250  315002 docker.go:318] overlay module found
	I1105 18:00:32.735912  315002 out.go:177] * Using the docker driver based on existing profile
	I1105 18:00:32.738493  315002 start.go:297] selected driver: docker
	I1105 18:00:32.738513  315002 start.go:901] validating driver "docker" against &{Name:functional-762187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-762187 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:00:32.738625  315002 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:00:32.741876  315002 out.go:201] 
	W1105 18:00:32.744537  315002 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1105 18:00:32.747396  315002 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-762187 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.58s)
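
Editor's note: exit status 23 above is the intended RSRC_INSUFFICIENT_REQ_MEMORY failure; with --dry-run minikube still validates the requested resources against its minimum (1800MB usable memory) without creating anything. Roughly:

	out/minikube-linux-arm64 start -p functional-762187 --dry-run --memory 250MB --driver=docker --container-runtime=crio   # rejected, exit 23
	out/minikube-linux-arm64 start -p functional-762187 --dry-run --driver=docker --container-runtime=crio                  # passes validation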

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-762187 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-762187 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (286.039057ms)

                                                
                                                
-- stdout --
	* [functional-762187] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19910
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19910-279806/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-279806/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 18:00:32.813032  315066 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:00:32.813266  315066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:00:32.813292  315066 out.go:358] Setting ErrFile to fd 2...
	I1105 18:00:32.813311  315066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:00:32.814149  315066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-279806/.minikube/bin
	I1105 18:00:32.815678  315066 out.go:352] Setting JSON to false
	I1105 18:00:32.816818  315066 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":6176,"bootTime":1730823457,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1105 18:00:32.816908  315066 start.go:139] virtualization:  
	I1105 18:00:32.819934  315066 out.go:177] * [functional-762187] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1105 18:00:32.824060  315066 notify.go:220] Checking for updates...
	I1105 18:00:32.826636  315066 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:00:32.830017  315066 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:00:32.833823  315066 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-279806/kubeconfig
	I1105 18:00:32.836230  315066 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-279806/.minikube
	I1105 18:00:32.838513  315066 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1105 18:00:32.840970  315066 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:00:32.844080  315066 config.go:182] Loaded profile config "functional-762187": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:00:32.844699  315066 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:00:32.884067  315066 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1105 18:00:32.884201  315066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 18:00:32.985826  315066 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-11-05 18:00:32.97255215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1105 18:00:32.985941  315066 docker.go:318] overlay module found
	I1105 18:00:32.990988  315066 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1105 18:00:32.993853  315066 start.go:297] selected driver: docker
	I1105 18:00:32.993873  315066 start.go:901] validating driver "docker" against &{Name:functional-762187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-762187 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 18:00:32.993982  315066 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:00:32.997556  315066 out.go:201] 
	W1105 18:00:33.000638  315066 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1105 18:00:33.003438  315066 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.29s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
E1105 18:00:18.314154  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:00:18.322317  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:00:18.334162  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:00:18.356438  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:00:18.398663  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 status -o json
E1105 18:00:18.481505  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:00:18.642819  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/StatusCmd (0.96s)
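
Editor's note: the -f form above takes a Go template over the status struct (the label text before each colon is arbitrary), and -o json dumps the same data as JSON. By hand:

	out/minikube-linux-arm64 -p functional-762187 status
	out/minikube-linux-arm64 -p functional-762187 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
	out/minikube-linux-arm64 -p functional-762187 status -o json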

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (6.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-762187 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-762187 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-gvdnl" [7c57630e-8fdb-4824-a7f0-ea0ff36b9a4b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-gvdnl" [7c57630e-8fdb-4824-a7f0-ea0ff36b9a4b] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.003945527s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31449
functional_test.go:1675: http://192.168.49.2:31449: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-gvdnl

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31449
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.63s)
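
Editor's note: the flow above is the standard NodePort round trip: create a deployment, expose it, then ask minikube for the node URL. The final curl below is just one way to hit the printed endpoint and is not part of the test itself:

	kubectl --context functional-762187 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-762187 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-arm64 -p functional-762187 service hello-node-connect --url
	curl "$(out/minikube-linux-arm64 -p functional-762187 service hello-node-connect --url)"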

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [08bc5dbe-4e53-4727-9403-a3170ce554a2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003088498s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-762187 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-762187 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-762187 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-762187 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bbccbaae-fbce-492b-a716-1fe89c9e2188] Pending
helpers_test.go:344: "sp-pod" [bbccbaae-fbce-492b-a716-1fe89c9e2188] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bbccbaae-fbce-492b-a716-1fe89c9e2188] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.00548828s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-762187 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-762187 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-762187 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e35c258c-1a59-4f28-b6d3-62c6420c8b9c] Pending
helpers_test.go:344: "sp-pod" [e35c258c-1a59-4f28-b6d3-62c6420c8b9c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1105 18:00:23.448946  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [e35c258c-1a59-4f28-b6d3-62c6420c8b9c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003942828s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-762187 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.84s)
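
Editor's note: the point of the sequence above is that data written into the PVC-backed mount survives deleting and recreating the pod. Using the same manifests this run used (paths relative to minikube's integration-test testdata directory), the round trip is roughly:

	kubectl --context functional-762187 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-762187 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-762187 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-762187 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-762187 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-762187 exec sp-pod -- ls /tmp/mount   # foo is still there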

                                                
                                    
TestFunctional/parallel/SSHCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh -n functional-762187 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 cp functional-762187:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2386355128/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh -n functional-762187 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh -n functional-762187 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.91s)
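
Editor's note: minikube cp copies in both directions, host path to node path and <profile>:<node path> back to the host. The final destination below is just an example path, not the temp directory this run used:

	out/minikube-linux-arm64 -p functional-762187 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-762187 ssh -n functional-762187 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-arm64 -p functional-762187 cp functional-762187:/home/docker/cp-test.txt /tmp/cp-test.txt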

                                                
                                    
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/285188/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "sudo cat /etc/test/nested/copy/285188/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
TestFunctional/parallel/CertSync (2.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/285188.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "sudo cat /etc/ssl/certs/285188.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/285188.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "sudo cat /usr/share/ca-certificates/285188.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2851882.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "sudo cat /etc/ssl/certs/2851882.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2851882.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "sudo cat /usr/share/ca-certificates/2851882.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.12s)
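
Editor's note: roughly, files placed under the host's .minikube/files and .minikube/certs directories are synced into the node, which is what the checks above verify; the 285188/2851882 file names are specific to this run. Spot-checking by hand looks like:

	out/minikube-linux-arm64 -p functional-762187 ssh "sudo cat /etc/ssl/certs/285188.pem"
	out/minikube-linux-arm64 -p functional-762187 ssh "sudo cat /usr/share/ca-certificates/285188.pem"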

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-762187 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-762187 ssh "sudo systemctl is-active docker": exit status 1 (337.056793ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-762187 ssh "sudo systemctl is-active containerd": exit status 1 (327.139094ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)
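
Editor's note: the non-zero exits above are expected, since systemctl is-active returns a non-zero status (3 here) for an inactive unit; the test passes precisely because docker and containerd are not running alongside cri-o. By hand:

	out/minikube-linux-arm64 -p functional-762187 ssh "sudo systemctl is-active docker"       # prints "inactive", exits 3
	out/minikube-linux-arm64 -p functional-762187 ssh "sudo systemctl is-active containerd"   # prints "inactive", exits 3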

                                                
                                    
TestFunctional/parallel/License (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-762187 version -o=json --components: (1.299264933s)
--- PASS: TestFunctional/parallel/Version/components (1.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-762187 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-762187
localhost/kicbase/echo-server:functional-762187
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241023-a345ebe4
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-762187 image ls --format short --alsologtostderr:
I1105 18:00:35.203162  315544 out.go:345] Setting OutFile to fd 1 ...
I1105 18:00:35.203382  315544 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:00:35.203408  315544 out.go:358] Setting ErrFile to fd 2...
I1105 18:00:35.203428  315544 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:00:35.203717  315544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-279806/.minikube/bin
I1105 18:00:35.204629  315544 config.go:182] Loaded profile config "functional-762187": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:00:35.204851  315544 config.go:182] Loaded profile config "functional-762187": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:00:35.205519  315544 cli_runner.go:164] Run: docker container inspect functional-762187 --format={{.State.Status}}
I1105 18:00:35.225596  315544 ssh_runner.go:195] Run: systemctl --version
I1105 18:00:35.226109  315544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-762187
I1105 18:00:35.248312  315544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/functional-762187/id_rsa Username:docker}
I1105 18:00:35.338446  315544 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-762187 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/my-image                      | functional-762187  | d6129c2b128d1 | 1.64MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | f9c26480f1e72 | 92.6MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 0bcd66b03df5f | 98.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/kindest/kindnetd              | v20241023-a345ebe4 | 55b97e1cbb2a3 | 98.3MB |
| localhost/kicbase/echo-server           | functional-762187  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| docker.io/library/nginx                 | alpine             | 577a23b5858b9 | 52.3MB |
| localhost/minikube-local-cache-test     | functional-762187  | 8e6920c2d3045 | 3.33kB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | d6b061e73ae45 | 67MB   |
| docker.io/library/nginx                 | latest             | 4b196525bd3cc | 201MB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 9404aea098d9e | 87MB   |
| registry.k8s.io/kube-proxy              | v1.31.2            | 021d242013305 | 96MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-762187 image ls --format table --alsologtostderr:
I1105 18:00:41.464964  315979 out.go:345] Setting OutFile to fd 1 ...
I1105 18:00:41.465417  315979 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:00:41.465425  315979 out.go:358] Setting ErrFile to fd 2...
I1105 18:00:41.465430  315979 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:00:41.465678  315979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-279806/.minikube/bin
I1105 18:00:41.467372  315979 config.go:182] Loaded profile config "functional-762187": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:00:41.467522  315979 config.go:182] Loaded profile config "functional-762187": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:00:41.468005  315979 cli_runner.go:164] Run: docker container inspect functional-762187 --format={{.State.Status}}
I1105 18:00:41.491911  315979 ssh_runner.go:195] Run: systemctl --version
I1105 18:00:41.491964  315979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-762187
I1105 18:00:41.510396  315979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/functional-762187/id_rsa Username:docker}
I1105 18:00:41.633561  315979 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image ls --format json --alsologtostderr
2024/11/05 18:00:41 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-762187 image ls --format json --alsologtostderr:
[{"id":"021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba","repoDigests":["registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe","registry.k8s.io/kube-proxy@sha256:adabb2ce69fab82e04b441902489c8dd06f47122f00bc1062189f3cf477c795a"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"95952789"},{"id":"d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:38def311c8c2668b4b3820de83cd518e0d1c32cda10e661163f957a87f92ca34"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"67007814"},{"id":"577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250","docker.io/library/nginx@sha256:d1f949a77b81762af560a6e8f3f2bc2817f1c575ede5a756749e3c5d459e6478"
],"repoTags":["docker.io/library/nginx:alpine"],"size":"52254450"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-762187"],"size":"4788229"},{"id":"9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752","registry.k8s.io/kube-controller-manager@sha256:b8d51076af39954cadc718ae40bd8a736ae5ad4
e0654465ae91886cad3a9b602"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"86996294"},{"id":"55b97e1cbb2a39e125fd41804d8dd0279b34111fe79fd4673ddc92bc97431ca2","repoDigests":["docker.io/kindest/kindnetd@sha256:96156439ac8537499e45fedf68a7cb80f0fbafd77fc4d7a5204d3151cf412450","docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16"],"repoTags":["docker.io/kindest/kindnetd:v20241023-a345ebe4"],"size":"98288690"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"4b196525bd3cc6aa7a72ba63c6c2ae6d957b57edd603a7070c5e31f8e63c51f9","repoDigests":["docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb","docker.io/library/n
ginx@sha256:3c8ba625caaaae90eced8640bd64f6bf87ad68773831837b27b35056df873aef"],"repoTags":["docker.io/library/nginx:latest"],"size":"200984107"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2
f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:b61c0e5ba940299ee811efe946ee83e509799ea7e0651e1b782e83a665b29bae"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"98291250"},{"id":"38e6baea50650ddc2b0eee9b75494aeb5496b900182d14268d3c0d2d01a4e2ab","repoDigests":["docker.io/library/a9080dae62923e1c68e791fe0743943ab1446fd3c7b78d344022930c36d46c03-tmp@sha256:94cc9d9d2309d6afc993cc02616a101865577c83ea31a3397f7f24dfe00afdfd"],"repoTags":[],"size":"1637644"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d7
95b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"8e6920c2d304526c3577b10b205cfc8aeaa488c574f782e741b19355fdab9e17","repoDigests":["localhost/minikube-local-cache-test@sha256:50a01f25ae7dba99fc5ab138cd96ca63664db90da3a9055b4e0db2fe7284deb1"],"repoTags":["localhost/minikube-local-cache-test:functional-762187"],"size":"3330"},{"id":"f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270","repoDigests":["registry.k8s.io/kube-apiserver@sha256:8e7caee5c8075d84ee5b93472bedf9cf21364da1d72d60d3de15dfa0d172ff63","registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"92632544"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506
b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"d6129c2b128d177d04580fb715eaccddc9d545d3c6b8de06a31e562e76c14c45","repoDigests":["localhost/my-image@sha256:c5adbb6545281cf831b9b0518589b5c1ab1aa33c450e28893e82c5a01cbd64c1"],"repoTags":["localhost/my-image:functional-762187"],"size":"1640226"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.
k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-762187 image ls --format json --alsologtostderr:
I1105 18:00:41.187538  315947 out.go:345] Setting OutFile to fd 1 ...
I1105 18:00:41.187741  315947 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:00:41.187768  315947 out.go:358] Setting ErrFile to fd 2...
I1105 18:00:41.187788  315947 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:00:41.188055  315947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-279806/.minikube/bin
I1105 18:00:41.188778  315947 config.go:182] Loaded profile config "functional-762187": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:00:41.188956  315947 config.go:182] Loaded profile config "functional-762187": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:00:41.189465  315947 cli_runner.go:164] Run: docker container inspect functional-762187 --format={{.State.Status}}
I1105 18:00:41.234011  315947 ssh_runner.go:195] Run: systemctl --version
I1105 18:00:41.234068  315947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-762187
I1105 18:00:41.263421  315947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/functional-762187/id_rsa Username:docker}
I1105 18:00:41.357153  315947 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)
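
Note: the `image ls --format json` output above is a flat JSON array of image records with id, repoDigests, repoTags and size fields (size is a string of bytes). A small sketch, assuming that shape, of decoding it in Go; the struct and the images.json file name are illustrative, not part of minikube:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // image mirrors the fields visible in the `image ls --format json` output above.
    type image struct {
    	ID          string   `json:"id"`
    	RepoDigests []string `json:"repoDigests"`
    	RepoTags    []string `json:"repoTags"`
    	Size        string   `json:"size"`
    }

    func main() {
    	// images.json is a hypothetical file holding the captured output.
    	data, err := os.ReadFile("images.json")
    	if err != nil {
    		panic(err)
    	}
    	var images []image
    	if err := json.Unmarshal(data, &images); err != nil {
    		panic(err)
    	}
    	for _, img := range images {
    		tag := "<untagged>"
    		if len(img.RepoTags) > 0 {
    			tag = img.RepoTags[0]
    		}
    		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
    	}
    }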

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-762187 image ls --format yaml --alsologtostderr:
- id: 0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:b61c0e5ba940299ee811efe946ee83e509799ea7e0651e1b782e83a665b29bae
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "98291250"
- id: 4b196525bd3cc6aa7a72ba63c6c2ae6d957b57edd603a7070c5e31f8e63c51f9
repoDigests:
- docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb
- docker.io/library/nginx@sha256:3c8ba625caaaae90eced8640bd64f6bf87ad68773831837b27b35056df873aef
repoTags:
- docker.io/library/nginx:latest
size: "200984107"
- id: f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:8e7caee5c8075d84ee5b93472bedf9cf21364da1d72d60d3de15dfa0d172ff63
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "92632544"
- id: d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:38def311c8c2668b4b3820de83cd518e0d1c32cda10e661163f957a87f92ca34
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "67007814"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-762187
size: "4788229"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: 021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba
repoDigests:
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
- registry.k8s.io/kube-proxy@sha256:adabb2ce69fab82e04b441902489c8dd06f47122f00bc1062189f3cf477c795a
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "95952789"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 55b97e1cbb2a39e125fd41804d8dd0279b34111fe79fd4673ddc92bc97431ca2
repoDigests:
- docker.io/kindest/kindnetd@sha256:96156439ac8537499e45fedf68a7cb80f0fbafd77fc4d7a5204d3151cf412450
- docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16
repoTags:
- docker.io/kindest/kindnetd:v20241023-a345ebe4
size: "98288690"
- id: 577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
- docker.io/library/nginx@sha256:d1f949a77b81762af560a6e8f3f2bc2817f1c575ede5a756749e3c5d459e6478
repoTags:
- docker.io/library/nginx:alpine
size: "52254450"
- id: 8e6920c2d304526c3577b10b205cfc8aeaa488c574f782e741b19355fdab9e17
repoDigests:
- localhost/minikube-local-cache-test@sha256:50a01f25ae7dba99fc5ab138cd96ca63664db90da3a9055b4e0db2fe7284deb1
repoTags:
- localhost/minikube-local-cache-test:functional-762187
size: "3330"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
- registry.k8s.io/kube-controller-manager@sha256:b8d51076af39954cadc718ae40bd8a736ae5ad4e0654465ae91886cad3a9b602
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "86996294"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-762187 image ls --format yaml --alsologtostderr:
I1105 18:00:35.439957  315574 out.go:345] Setting OutFile to fd 1 ...
I1105 18:00:35.440124  315574 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:00:35.440139  315574 out.go:358] Setting ErrFile to fd 2...
I1105 18:00:35.440145  315574 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:00:35.440418  315574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-279806/.minikube/bin
I1105 18:00:35.441117  315574 config.go:182] Loaded profile config "functional-762187": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:00:35.441277  315574 config.go:182] Loaded profile config "functional-762187": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:00:35.441817  315574 cli_runner.go:164] Run: docker container inspect functional-762187 --format={{.State.Status}}
I1105 18:00:35.458548  315574 ssh_runner.go:195] Run: systemctl --version
I1105 18:00:35.458623  315574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-762187
I1105 18:00:35.475390  315574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/functional-762187/id_rsa Username:docker}
I1105 18:00:35.565005  315574 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-762187 ssh pgrep buildkitd: exit status 1 (287.61266ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image build -t localhost/my-image:functional-762187 testdata/build --alsologtostderr
E1105 18:00:38.812593  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-762187 image build -t localhost/my-image:functional-762187 testdata/build --alsologtostderr: (4.94004436s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-762187 image build -t localhost/my-image:functional-762187 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 38e6baea506
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-762187
--> d6129c2b128
Successfully tagged localhost/my-image:functional-762187
d6129c2b128d177d04580fb715eaccddc9d545d3c6b8de06a31e562e76c14c45
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-762187 image build -t localhost/my-image:functional-762187 testdata/build --alsologtostderr:
I1105 18:00:35.958743  315685 out.go:345] Setting OutFile to fd 1 ...
I1105 18:00:35.959468  315685 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:00:35.959503  315685 out.go:358] Setting ErrFile to fd 2...
I1105 18:00:35.959525  315685 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1105 18:00:35.959802  315685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-279806/.minikube/bin
I1105 18:00:35.960519  315685 config.go:182] Loaded profile config "functional-762187": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:00:35.961279  315685 config.go:182] Loaded profile config "functional-762187": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1105 18:00:35.961772  315685 cli_runner.go:164] Run: docker container inspect functional-762187 --format={{.State.Status}}
I1105 18:00:35.982311  315685 ssh_runner.go:195] Run: systemctl --version
I1105 18:00:35.982364  315685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-762187
I1105 18:00:36.003119  315685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/functional-762187/id_rsa Username:docker}
I1105 18:00:36.092970  315685 build_images.go:161] Building image from path: /tmp/build.2105069274.tar
I1105 18:00:36.093041  315685 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1105 18:00:36.102095  315685 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2105069274.tar
I1105 18:00:36.105829  315685 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2105069274.tar: stat -c "%s %y" /var/lib/minikube/build/build.2105069274.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2105069274.tar': No such file or directory
I1105 18:00:36.105861  315685 ssh_runner.go:362] scp /tmp/build.2105069274.tar --> /var/lib/minikube/build/build.2105069274.tar (3072 bytes)
I1105 18:00:36.132898  315685 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2105069274
I1105 18:00:36.142166  315685 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2105069274 -xf /var/lib/minikube/build/build.2105069274.tar
I1105 18:00:36.152257  315685 crio.go:315] Building image: /var/lib/minikube/build/build.2105069274
I1105 18:00:36.152377  315685 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-762187 /var/lib/minikube/build/build.2105069274 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1105 18:00:40.812031  315685 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-762187 /var/lib/minikube/build/build.2105069274 --cgroup-manager=cgroupfs: (4.659625635s)
I1105 18:00:40.812099  315685 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2105069274
I1105 18:00:40.820867  315685 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2105069274.tar
I1105 18:00:40.829165  315685 build_images.go:217] Built localhost/my-image:functional-762187 from /tmp/build.2105069274.tar
I1105 18:00:40.829192  315685 build_images.go:133] succeeded building to: functional-762187
I1105 18:00:40.829197  315685 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.49s)
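
Note: the stderr trace above shows the sequence `image build` drives on a cri-o node: the local build context is tarred, copied into /var/lib/minikube/build, unpacked, and built with `sudo podman build -t <tag> <dir> --cgroup-manager=cgroupfs`. A minimal local analogue, assuming podman is installed on the machine running it; the tag and context path are placeholders, not values used by the test:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Build a context directory into a locally tagged image, roughly the
    	// command minikube runs remotely over SSH in the trace above.
    	cmd := exec.Command("podman", "build", "-t", "localhost/my-image:demo", "./testdata/build")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("build failed:", err)
    	}
    }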

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-762187
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image load --daemon kicbase/echo-server:functional-762187 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-762187 image load --daemon kicbase/echo-server:functional-762187 --alsologtostderr: (1.277386103s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image load --daemon kicbase/echo-server:functional-762187 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.12s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-762187 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-762187 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-6vhxg" [4095434b-15e6-492b-b7c9-333860c3569e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-6vhxg" [4095434b-15e6-492b-b7c9-333860c3569e] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004423928s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.29s)
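
Note: the deployment/expose pair above is plain kubectl. A compact sketch that shells out to the same two commands from Go, roughly the way the test harness issues its "(dbg) Run:" lines; the run helper is illustrative and assumes kubectl and the functional-762187 context are available:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes kubectl with the given arguments and echoes its output.
    func run(args ...string) error {
    	out, err := exec.Command("kubectl", args...).CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	ctx := "functional-762187" // context name from the log above
    	if err := run("--context", ctx, "create", "deployment", "hello-node",
    		"--image=registry.k8s.io/echoserver-arm:1.8"); err != nil {
    		panic(err)
    	}
    	if err := run("--context", ctx, "expose", "deployment", "hello-node",
    		"--type=NodePort", "--port=8080"); err != nil {
    		panic(err)
    	}
    }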

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-762187
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image load --daemon kicbase/echo-server:functional-762187 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image save kicbase/echo-server:functional-762187 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-762187 image save kicbase/echo-server:functional-762187 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr: (2.421472753s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image rm kicbase/echo-server:functional-762187 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-762187
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 image save --daemon kicbase/echo-server:functional-762187 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-762187
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-762187 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-762187 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-762187 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-762187 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 312033: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-762187 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-762187 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [93f5dcd7-f4e5-4e97-9548-ff3843447e78] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [93f5dcd7-f4e5-4e97-9548-ff3843447e78] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.007026206s
I1105 18:00:10.731682  285188 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 service list -o json
functional_test.go:1494: Took "397.591134ms" to run "out/minikube-linux-arm64 -p functional-762187 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31548
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31548
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-762187 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)
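
Note: with `minikube tunnel` running, the LoadBalancer service is assigned an ingress IP, which the test reads with the jsonpath shown above. A sketch that polls for that IP the same way, assuming kubectl and the functional-762187 context are available; the retry count and interval are arbitrary:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	jsonpath := "jsonpath={.status.loadBalancer.ingress[0].ip}"
    	for i := 0; i < 30; i++ {
    		out, err := exec.Command("kubectl", "--context", "functional-762187",
    			"get", "svc", "nginx-svc", "-o", jsonpath).Output()
    		ip := strings.TrimSpace(string(out))
    		if err == nil && ip != "" {
    			fmt.Println("ingress IP:", ip)
    			return
    		}
    		time.Sleep(2 * time.Second) // the tunnel may take a moment to program the IP
    	}
    	fmt.Println("no ingress IP assigned")
    }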

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.70.84 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
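
Note: "tunnel ... is working" here means the LoadBalancer IP reported above (10.100.70.84 in this run) answers HTTP directly from the host. A minimal reachability check in the same spirit, assuming the tunnel is still running; substitute the IP printed by the jsonpath query in the previous step:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 5 * time.Second}
    	// 10.100.70.84 is the service IP reported in this run.
    	resp, err := client.Get("http://10.100.70.84/")
    	if err != nil {
    		fmt.Println("tunnel not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("tunnel responded with status:", resp.Status)
    }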

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-762187 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1105 18:00:18.964065  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "337.36067ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "57.316934ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
E1105 18:00:19.605370  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1366: Took "329.168476ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "58.702791ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-762187 /tmp/TestFunctionalparallelMountCmdany-port1446681198/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1730829619985542063" to /tmp/TestFunctionalparallelMountCmdany-port1446681198/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1730829619985542063" to /tmp/TestFunctionalparallelMountCmdany-port1446681198/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1730829619985542063" to /tmp/TestFunctionalparallelMountCmdany-port1446681198/001/test-1730829619985542063
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-762187 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (301.660641ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1105 18:00:20.287464  285188 retry.go:31] will retry after 540.562135ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "findmnt -T /mount-9p | grep 9p"
E1105 18:00:20.887178  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  5 18:00 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  5 18:00 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  5 18:00 test-1730829619985542063
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh cat /mount-9p/test-1730829619985542063
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-762187 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c7b137d4-ad61-477c-ad5d-95e870e936ce] Pending
helpers_test.go:344: "busybox-mount" [c7b137d4-ad61-477c-ad5d-95e870e936ce] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c7b137d4-ad61-477c-ad5d-95e870e936ce] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c7b137d4-ad61-477c-ad5d-95e870e936ce] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003480053s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-762187 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "sudo umount -f /mount-9p"
E1105 18:00:28.571112  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-762187 /tmp/TestFunctionalparallelMountCmdany-port1446681198/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.88s)
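A minimal sketch of the flow exercised above, assuming the minikube binary is on PATH and reusing the profile name from this log; the host directory is a placeholder, and the retry loop mirrors what the test does after its first non-zero findmnt exit:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-762187" // profile name taken from the log above
	hostDir := "/tmp/mount-src"    // placeholder host directory

	// Start the 9p mount in the background, like the test's daemon step.
	mount := exec.Command("minikube", "mount", "-p", profile, hostDir+":/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill() // rough equivalent of the test's stop step

	// Poll findmnt over ssh until the 9p mount is visible in the guest.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mount visible in guest:\n%s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount did not appear within the retry budget")
}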

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-762187 /tmp/TestFunctionalparallelMountCmdspecific-port368339052/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-762187 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (300.056875ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1105 18:00:29.167865  285188 retry.go:31] will retry after 695.0816ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-762187 /tmp/TestFunctionalparallelMountCmdspecific-port368339052/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-762187 ssh "sudo umount -f /mount-9p": exit status 1 (263.876796ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-762187 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-762187 /tmp/TestFunctionalparallelMountCmdspecific-port368339052/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.98s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-762187 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1694894958/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-762187 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1694894958/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-762187 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1694894958/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-762187 ssh "findmnt -T" /mount1: exit status 1 (530.907474ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1105 18:00:31.377623  285188 retry.go:31] will retry after 403.129915ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-762187 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-762187 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-762187 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1694894958/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-762187 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1694894958/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-762187 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1694894958/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)
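The cleanup path above reduces to a single --kill=true invocation followed by re-checking each target; a hedged sketch of that check, with the profile and mount targets copied from this log (a non-zero findmnt exit is the desired outcome here):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-762187"
	// Kill all background mount processes for this profile, as the test does.
	if out, err := exec.Command("minikube", "mount", "-p", profile, "--kill=true").CombinedOutput(); err != nil {
		fmt.Printf("kill failed: %v\n%s", err, out)
		return
	}
	for _, target := range []string{"/mount1", "/mount2", "/mount3"} {
		// A non-zero exit means the mount is gone, which is the state we want.
		if err := exec.Command("minikube", "-p", profile, "ssh", "findmnt -T "+target).Run(); err != nil {
			fmt.Println(target, "is unmounted")
		} else {
			fmt.Println(target, "is still mounted")
		}
	}
}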

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-762187
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-762187
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-762187
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (172.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-256890 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1105 18:00:59.293934  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:01:40.256327  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:03:02.178169  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-256890 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m52.066727794s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (172.83s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-256890 -- rollout status deployment/busybox: (5.728746427s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- exec busybox-7dff88458-nwfsj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- exec busybox-7dff88458-r5tkv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- exec busybox-7dff88458-z4gpr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- exec busybox-7dff88458-nwfsj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- exec busybox-7dff88458-r5tkv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- exec busybox-7dff88458-z4gpr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- exec busybox-7dff88458-nwfsj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- exec busybox-7dff88458-r5tkv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- exec busybox-7dff88458-z4gpr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.77s)
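The DNS assertions above are the same kubectl exec/nslookup pair repeated across every busybox pod and every lookup name; a compact sketch of that loop, assuming kubectl can already reach the ha-256890 context and using the pod names from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-nwfsj", "busybox-7dff88458-r5tkv", "busybox-7dff88458-z4gpr"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "ha-256890",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s -> %s failed: %v\n%s", pod, name, err, out)
				continue
			}
			fmt.Printf("%s resolves %s\n", pod, name)
		}
	}
}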

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- exec busybox-7dff88458-nwfsj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- exec busybox-7dff88458-nwfsj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- exec busybox-7dff88458-r5tkv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- exec busybox-7dff88458-r5tkv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- exec busybox-7dff88458-z4gpr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256890 -- exec busybox-7dff88458-z4gpr -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.53s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (33.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-256890 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-256890 -v=7 --alsologtostderr: (32.45710598s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (33.41s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-256890 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp testdata/cp-test.txt ha-256890:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp ha-256890:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile208591027/001/cp-test_ha-256890.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp ha-256890:/home/docker/cp-test.txt ha-256890-m02:/home/docker/cp-test_ha-256890_ha-256890-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m02 "sudo cat /home/docker/cp-test_ha-256890_ha-256890-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp ha-256890:/home/docker/cp-test.txt ha-256890-m03:/home/docker/cp-test_ha-256890_ha-256890-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m03 "sudo cat /home/docker/cp-test_ha-256890_ha-256890-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp ha-256890:/home/docker/cp-test.txt ha-256890-m04:/home/docker/cp-test_ha-256890_ha-256890-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m04 "sudo cat /home/docker/cp-test_ha-256890_ha-256890-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp testdata/cp-test.txt ha-256890-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp ha-256890-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile208591027/001/cp-test_ha-256890-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp ha-256890-m02:/home/docker/cp-test.txt ha-256890:/home/docker/cp-test_ha-256890-m02_ha-256890.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890 "sudo cat /home/docker/cp-test_ha-256890-m02_ha-256890.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp ha-256890-m02:/home/docker/cp-test.txt ha-256890-m03:/home/docker/cp-test_ha-256890-m02_ha-256890-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m03 "sudo cat /home/docker/cp-test_ha-256890-m02_ha-256890-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp ha-256890-m02:/home/docker/cp-test.txt ha-256890-m04:/home/docker/cp-test_ha-256890-m02_ha-256890-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m04 "sudo cat /home/docker/cp-test_ha-256890-m02_ha-256890-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp testdata/cp-test.txt ha-256890-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp ha-256890-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile208591027/001/cp-test_ha-256890-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp ha-256890-m03:/home/docker/cp-test.txt ha-256890:/home/docker/cp-test_ha-256890-m03_ha-256890.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890 "sudo cat /home/docker/cp-test_ha-256890-m03_ha-256890.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp ha-256890-m03:/home/docker/cp-test.txt ha-256890-m02:/home/docker/cp-test_ha-256890-m03_ha-256890-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m02 "sudo cat /home/docker/cp-test_ha-256890-m03_ha-256890-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp ha-256890-m03:/home/docker/cp-test.txt ha-256890-m04:/home/docker/cp-test_ha-256890-m03_ha-256890-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m04 "sudo cat /home/docker/cp-test_ha-256890-m03_ha-256890-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp testdata/cp-test.txt ha-256890-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp ha-256890-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile208591027/001/cp-test_ha-256890-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp ha-256890-m04:/home/docker/cp-test.txt ha-256890:/home/docker/cp-test_ha-256890-m04_ha-256890.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890 "sudo cat /home/docker/cp-test_ha-256890-m04_ha-256890.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp ha-256890-m04:/home/docker/cp-test.txt ha-256890-m02:/home/docker/cp-test_ha-256890-m04_ha-256890-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m02 "sudo cat /home/docker/cp-test_ha-256890-m04_ha-256890-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 cp ha-256890-m04:/home/docker/cp-test.txt ha-256890-m03:/home/docker/cp-test_ha-256890-m04_ha-256890-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 ssh -n ha-256890-m03 "sudo cat /home/docker/cp-test_ha-256890-m04_ha-256890-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.80s)
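Each CopyFile assertion above is a cp into a node followed by an ssh cat back out of it; a minimal sketch of one such round trip per node, with content comparison left to the caller (node names and paths taken from this log):

package main

import (
	"fmt"
	"os/exec"
)

// copyAndRead pushes a local file to one node of the ha-256890 profile and reads
// it back over ssh, mirroring a single helpers_test.go round trip from the log above.
func copyAndRead(node, localPath string) (string, error) {
	dest := node + ":/home/docker/cp-test.txt"
	if out, err := exec.Command("minikube", "-p", "ha-256890", "cp", localPath, dest).CombinedOutput(); err != nil {
		return "", fmt.Errorf("cp failed: %v: %s", err, out)
	}
	out, err := exec.Command("minikube", "-p", "ha-256890", "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").CombinedOutput()
	return string(out), err
}

func main() {
	for _, node := range []string{"ha-256890", "ha-256890-m02", "ha-256890-m03", "ha-256890-m04"} {
		content, err := copyAndRead(node, "testdata/cp-test.txt")
		if err != nil {
			fmt.Println(node, err)
			continue
		}
		fmt.Printf("%s: %q\n", node, content)
	}
}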

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-256890 node stop m02 -v=7 --alsologtostderr: (11.960886016s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-256890 status -v=7 --alsologtostderr: exit status 7 (711.443657ms)

                                                
                                                
-- stdout --
	ha-256890
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-256890-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-256890-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-256890-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 18:04:52.109595  331550 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:04:52.110190  331550 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:04:52.110248  331550 out.go:358] Setting ErrFile to fd 2...
	I1105 18:04:52.110269  331550 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:04:52.110548  331550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-279806/.minikube/bin
	I1105 18:04:52.110783  331550 out.go:352] Setting JSON to false
	I1105 18:04:52.110845  331550 mustload.go:65] Loading cluster: ha-256890
	I1105 18:04:52.110943  331550 notify.go:220] Checking for updates...
	I1105 18:04:52.111333  331550 config.go:182] Loaded profile config "ha-256890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:04:52.111372  331550 status.go:174] checking status of ha-256890 ...
	I1105 18:04:52.111966  331550 cli_runner.go:164] Run: docker container inspect ha-256890 --format={{.State.Status}}
	I1105 18:04:52.132784  331550 status.go:371] ha-256890 host status = "Running" (err=<nil>)
	I1105 18:04:52.132806  331550 host.go:66] Checking if "ha-256890" exists ...
	I1105 18:04:52.133113  331550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256890
	I1105 18:04:52.161409  331550 host.go:66] Checking if "ha-256890" exists ...
	I1105 18:04:52.161707  331550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 18:04:52.161754  331550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890
	I1105 18:04:52.183057  331550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890/id_rsa Username:docker}
	I1105 18:04:52.274035  331550 ssh_runner.go:195] Run: systemctl --version
	I1105 18:04:52.278645  331550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:04:52.293182  331550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 18:04:52.344125  331550 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-11-05 18:04:52.334265058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1105 18:04:52.344951  331550 kubeconfig.go:125] found "ha-256890" server: "https://192.168.49.254:8443"
	I1105 18:04:52.344984  331550 api_server.go:166] Checking apiserver status ...
	I1105 18:04:52.345031  331550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:04:52.356028  331550 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	I1105 18:04:52.365215  331550 api_server.go:182] apiserver freezer: "5:freezer:/docker/2049705509a4b93d716c958a7dfd3aaa22510b2739881bd24383c55492878c14/crio/crio-62f44f1d555fa2345f71f118de901a7556b1f8c85ec0355846ac72a34429020d"
	I1105 18:04:52.365305  331550 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2049705509a4b93d716c958a7dfd3aaa22510b2739881bd24383c55492878c14/crio/crio-62f44f1d555fa2345f71f118de901a7556b1f8c85ec0355846ac72a34429020d/freezer.state
	I1105 18:04:52.375870  331550 api_server.go:204] freezer state: "THAWED"
	I1105 18:04:52.375900  331550 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1105 18:04:52.385089  331550 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1105 18:04:52.385119  331550 status.go:463] ha-256890 apiserver status = Running (err=<nil>)
	I1105 18:04:52.385129  331550 status.go:176] ha-256890 status: &{Name:ha-256890 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 18:04:52.385144  331550 status.go:174] checking status of ha-256890-m02 ...
	I1105 18:04:52.385434  331550 cli_runner.go:164] Run: docker container inspect ha-256890-m02 --format={{.State.Status}}
	I1105 18:04:52.404628  331550 status.go:371] ha-256890-m02 host status = "Stopped" (err=<nil>)
	I1105 18:04:52.404650  331550 status.go:384] host is not running, skipping remaining checks
	I1105 18:04:52.404657  331550 status.go:176] ha-256890-m02 status: &{Name:ha-256890-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 18:04:52.404675  331550 status.go:174] checking status of ha-256890-m03 ...
	I1105 18:04:52.404967  331550 cli_runner.go:164] Run: docker container inspect ha-256890-m03 --format={{.State.Status}}
	I1105 18:04:52.422495  331550 status.go:371] ha-256890-m03 host status = "Running" (err=<nil>)
	I1105 18:04:52.422518  331550 host.go:66] Checking if "ha-256890-m03" exists ...
	I1105 18:04:52.422816  331550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256890-m03
	I1105 18:04:52.440425  331550 host.go:66] Checking if "ha-256890-m03" exists ...
	I1105 18:04:52.440772  331550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 18:04:52.440824  331550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m03
	I1105 18:04:52.458440  331550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890-m03/id_rsa Username:docker}
	I1105 18:04:52.547917  331550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:04:52.562143  331550 kubeconfig.go:125] found "ha-256890" server: "https://192.168.49.254:8443"
	I1105 18:04:52.562174  331550 api_server.go:166] Checking apiserver status ...
	I1105 18:04:52.562214  331550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:04:52.574597  331550 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1324/cgroup
	I1105 18:04:52.584108  331550 api_server.go:182] apiserver freezer: "5:freezer:/docker/0793a1d41d4396bf736fd9d8a689f6656e2ffd6353a989e5f5a97e603ea14de5/crio/crio-7ba870e35ba22a489903962fd31ececd2364a8107abc6dd3e65cd4d3aaa520ef"
	I1105 18:04:52.584180  331550 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0793a1d41d4396bf736fd9d8a689f6656e2ffd6353a989e5f5a97e603ea14de5/crio/crio-7ba870e35ba22a489903962fd31ececd2364a8107abc6dd3e65cd4d3aaa520ef/freezer.state
	I1105 18:04:52.594474  331550 api_server.go:204] freezer state: "THAWED"
	I1105 18:04:52.594504  331550 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1105 18:04:52.602370  331550 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1105 18:04:52.602403  331550 status.go:463] ha-256890-m03 apiserver status = Running (err=<nil>)
	I1105 18:04:52.602413  331550 status.go:176] ha-256890-m03 status: &{Name:ha-256890-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 18:04:52.602430  331550 status.go:174] checking status of ha-256890-m04 ...
	I1105 18:04:52.602745  331550 cli_runner.go:164] Run: docker container inspect ha-256890-m04 --format={{.State.Status}}
	I1105 18:04:52.621155  331550 status.go:371] ha-256890-m04 host status = "Running" (err=<nil>)
	I1105 18:04:52.621179  331550 host.go:66] Checking if "ha-256890-m04" exists ...
	I1105 18:04:52.621468  331550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256890-m04
	I1105 18:04:52.637029  331550 host.go:66] Checking if "ha-256890-m04" exists ...
	I1105 18:04:52.637332  331550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 18:04:52.637376  331550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256890-m04
	I1105 18:04:52.659033  331550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/ha-256890-m04/id_rsa Username:docker}
	I1105 18:04:52.749929  331550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:04:52.761803  331550 status.go:176] ha-256890-m04 status: &{Name:ha-256890-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.67s)
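Note that "minikube status" deliberately exits non-zero when any host is stopped (exit status 7 in the run above, while ha-256890-m02 was down), so anything that shells out to it needs to read stdout even on error. A small sketch of that handling:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "ha-256890", "status")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit code 7 was returned above with one control-plane node stopped;
		// the per-node report is still on stdout, so keep and print it.
		fmt.Printf("status exit code %d\n", exitErr.ExitCode())
	} else if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}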

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (25.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 node start m02 -v=7 --alsologtostderr
E1105 18:04:53.688199  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:04:53.695013  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:04:53.706704  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:04:53.728144  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:04:53.769488  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:04:53.850914  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:04:54.012753  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:04:54.334520  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:04:54.976858  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:04:56.258187  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:04:58.820462  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:05:03.942686  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:05:14.184009  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-256890 node start m02 -v=7 --alsologtostderr: (24.410324905s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 status -v=7 --alsologtostderr
E1105 18:05:18.310762  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-256890 status -v=7 --alsologtostderr: (1.413910359s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (25.96s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.29450361s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.29s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (196.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-256890 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-256890 -v=7 --alsologtostderr
E1105 18:05:34.665748  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:05:46.020023  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-256890 -v=7 --alsologtostderr: (36.901058795s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-256890 --wait=true -v=7 --alsologtostderr
E1105 18:06:15.627126  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:07:37.549216  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-256890 --wait=true -v=7 --alsologtostderr: (2m39.764303096s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-256890
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (196.85s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-256890 stop -v=7 --alsologtostderr: (35.6429899s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-256890 status -v=7 --alsologtostderr: exit status 7 (128.357022ms)

                                                
                                                
-- stdout --
	ha-256890
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-256890-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-256890-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 18:09:30.709972  346471 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:09:30.710135  346471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:09:30.710146  346471 out.go:358] Setting ErrFile to fd 2...
	I1105 18:09:30.710152  346471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:09:30.710472  346471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-279806/.minikube/bin
	I1105 18:09:30.710702  346471 out.go:352] Setting JSON to false
	I1105 18:09:30.710731  346471 mustload.go:65] Loading cluster: ha-256890
	I1105 18:09:30.711481  346471 notify.go:220] Checking for updates...
	I1105 18:09:30.711792  346471 config.go:182] Loaded profile config "ha-256890": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:09:30.711838  346471 status.go:174] checking status of ha-256890 ...
	I1105 18:09:30.712475  346471 cli_runner.go:164] Run: docker container inspect ha-256890 --format={{.State.Status}}
	I1105 18:09:30.730576  346471 status.go:371] ha-256890 host status = "Stopped" (err=<nil>)
	I1105 18:09:30.730596  346471 status.go:384] host is not running, skipping remaining checks
	I1105 18:09:30.730603  346471 status.go:176] ha-256890 status: &{Name:ha-256890 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 18:09:30.730632  346471 status.go:174] checking status of ha-256890-m02 ...
	I1105 18:09:30.730927  346471 cli_runner.go:164] Run: docker container inspect ha-256890-m02 --format={{.State.Status}}
	I1105 18:09:30.760709  346471 status.go:371] ha-256890-m02 host status = "Stopped" (err=<nil>)
	I1105 18:09:30.760730  346471 status.go:384] host is not running, skipping remaining checks
	I1105 18:09:30.760737  346471 status.go:176] ha-256890-m02 status: &{Name:ha-256890-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 18:09:30.760756  346471 status.go:174] checking status of ha-256890-m04 ...
	I1105 18:09:30.761047  346471 cli_runner.go:164] Run: docker container inspect ha-256890-m04 --format={{.State.Status}}
	I1105 18:09:30.779649  346471 status.go:371] ha-256890-m04 host status = "Stopped" (err=<nil>)
	I1105 18:09:30.779668  346471 status.go:384] host is not running, skipping remaining checks
	I1105 18:09:30.779675  346471 status.go:176] ha-256890-m04 status: &{Name:ha-256890-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (62.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-256890 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1105 18:09:53.688659  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:10:18.310458  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:10:21.390826  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-256890 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m1.441832577s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (62.37s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (74.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-256890 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-256890 --control-plane -v=7 --alsologtostderr: (1m13.971141745s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-256890 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.92s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                    
TestJSONOutput/start/Command (46.86s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-630029 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-630029 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (46.854746666s)
--- PASS: TestJSONOutput/start/Command (46.86s)
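With --output=json each step is emitted as one JSON event per line; the TestErrorJSONOutput entry near the end of this report shows the shape (specversion, type, and a data object with message/name/currentstep/totalsteps). A hedged sketch for consuming that stream, with the struct fields inferred from that sample rather than from any published schema and a placeholder profile name:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// event mirrors the fields visible in the sample JSON line later in this report;
// fields not listed here are simply ignored by encoding/json.
type event struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
		TotalSteps  string `json:"totalsteps"`
		Message     string `json:"message"`
	} `json:"data"`
}

func main() {
	// "json-output-demo" is a placeholder profile name, not one from this report.
	cmd := exec.Command("minikube", "start", "-p", "json-output-demo", "--output=json")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		var ev event
		if json.Unmarshal(scanner.Bytes(), &ev) != nil {
			continue // skip anything that is not a JSON event line
		}
		fmt.Printf("[%s/%s] %s (%s)\n", ev.Data.CurrentStep, ev.Data.TotalSteps, ev.Data.Message, ev.Type)
	}
	if err := cmd.Wait(); err != nil {
		fmt.Println("start finished with:", err)
	}
}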

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-630029 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-630029 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.9s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-630029 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-630029 --output=json --user=testUser: (5.898152379s)
--- PASS: TestJSONOutput/stop/Command (5.90s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-881100 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-881100 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.012982ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"34964119-2636-4a2a-9210-d47b2358fa3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-881100] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f0fab38b-d114-4b85-9fa8-68c2b5093b14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19910"}}
	{"specversion":"1.0","id":"ed9f1ca8-03f0-4612-8a06-5403a4beb2d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"523e5acb-587a-4072-bad8-80d13db7eb39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19910-279806/kubeconfig"}}
	{"specversion":"1.0","id":"4ce1d32b-6d11-4785-ae8c-b3974845d0fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-279806/.minikube"}}
	{"specversion":"1.0","id":"733cfa04-74fa-4934-ac46-2a7a9534f7d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2bceb281-555b-4477-83d6-f71317e4762c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2591a579-8a0d-4023-883b-5392bd371012","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-881100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-881100
--- PASS: TestErrorJSONOutput (0.23s)
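Note: the stdout block above is minikube's line-delimited, CloudEvents-style JSON output (--output=json). As a purely illustrative sketch, not part of the test suite, the following Go program decodes such lines from stdin using only the fields visible above (type and the string-valued data map); any other fields minikube may emit are ignored.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the fields visible in the captured stdout above;
// anything else in an emitted line is ignored by the decoder.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Read line-delimited JSON events, e.g. piped from:
	//   out/minikube-linux-arm64 start -p json-output-error-881100 --output=json ...
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip non-JSON lines
		}
		switch e.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", e.Data["currentstep"], e.Data["totalsteps"], e.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error (exit %s): %s\n", e.Data["exitcode"], e.Data["message"])
		default:
			fmt.Println(e.Data["message"])
		}
	}
}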

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.13s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-159217 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-159217 --network=: (36.058553789s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-159217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-159217
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-159217: (2.045780659s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.13s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (30.32s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-136867 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-136867 --network=bridge: (28.346576132s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-136867" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-136867
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-136867: (1.947322869s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.32s)

                                                
                                    
TestKicExistingNetwork (34.64s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1105 18:14:07.912523  285188 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1105 18:14:07.926405  285188 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1105 18:14:07.926485  285188 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1105 18:14:07.926501  285188 cli_runner.go:164] Run: docker network inspect existing-network
W1105 18:14:07.941758  285188 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1105 18:14:07.941789  285188 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1105 18:14:07.941805  285188 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1105 18:14:07.942000  285188 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1105 18:14:07.958092  285188 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-620990126bf3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:23:45:c8:71} reservation:<nil>}
I1105 18:14:07.958495  285188 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000770dd0}
I1105 18:14:07.958522  285188 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1105 18:14:07.958573  285188 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1105 18:14:08.025832  285188 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-854455 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-854455 --network=existing-network: (32.458995702s)
helpers_test.go:175: Cleaning up "existing-network-854455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-854455
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-854455: (2.038419782s)
I1105 18:14:42.539122  285188 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.64s)
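Note: the trace above shows minikube picking a free private subnet (192.168.58.0/24) and pre-creating the existing-network bridge before `minikube start --network=existing-network` attaches to it. Below is a minimal sketch of that setup, reusing the exact `docker network create` flags and profile name from the log; the run helper and error handling are illustrative only.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a command and aborts on failure, echoing combined output.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	fmt.Printf("%s", out)
}

func main() {
	// Same flags network_create.go issues in the trace above
	// (subnet/gateway/MTU values copied from the log).
	run("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")

	// Attach a cluster to the pre-existing network, as kic_custom_network_test.go:93 does.
	run("out/minikube-linux-arm64", "start", "-p", "existing-network-854455",
		"--network=existing-network")
}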

                                                
                                    
TestKicCustomSubnet (32.64s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-702883 --subnet=192.168.60.0/24
E1105 18:14:53.688638  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-702883 --subnet=192.168.60.0/24: (30.504890264s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-702883 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-702883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-702883
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-702883: (2.108373607s)
--- PASS: TestKicCustomSubnet (32.64s)
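Note: kic_custom_network_test.go verifies --subnet by inspecting the Docker network that minikube names after the profile. A small sketch of the same start-then-inspect sequence, assuming the profile name and subnet from this run; it compares the first IPAM config's subnet against what was requested.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile = "custom-subnet-702883" // profile name from the log
	const subnet = "192.168.60.0/24"

	// Create the cluster on a caller-chosen subnet.
	if out, err := exec.Command("out/minikube-linux-arm64", "start",
		"-p", profile, "--subnet="+subnet).CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}

	// Verify with the same template the test uses: the kic network carries
	// the profile's name and its first IPAM config holds the subnet.
	out, err := exec.Command("docker", "network", "inspect", profile,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		log.Fatalf("inspect failed: %v", err)
	}
	if got := strings.TrimSpace(string(out)); got != subnet {
		log.Fatalf("expected subnet %s, got %s", subnet, got)
	}
	fmt.Println("subnet matches:", subnet)
}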

                                                
                                    
TestKicStaticIP (30.91s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-269190 --static-ip=192.168.200.200
E1105 18:15:18.310905  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-269190 --static-ip=192.168.200.200: (28.73014169s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-269190 ip
helpers_test.go:175: Cleaning up "static-ip-269190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-269190
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-269190: (2.02164262s)
--- PASS: TestKicStaticIP (30.91s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (66.21s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-158848 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-158848 --driver=docker  --container-runtime=crio: (28.703515684s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-161416 --driver=docker  --container-runtime=crio
E1105 18:16:41.382727  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-161416 --driver=docker  --container-runtime=crio: (31.87249624s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-158848
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-161416
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-161416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-161416
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-161416: (2.020954592s)
helpers_test.go:175: Cleaning up "first-158848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-158848
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-158848: (2.319416971s)
--- PASS: TestMinikubeProfile (66.21s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.33s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-994722 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-994722 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.332858565s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.33s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-994722 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
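Note: combining the two steps above (StartWithMountFirst and VerifyMountFirst), this sketch starts a no-Kubernetes node with the same mount flags and then lists /minikube-host over minikube ssh. Flags and the profile name are taken from the log; the sketch is illustrative, not the harness code.

package main

import (
	"log"
	"os/exec"
)

func main() {
	const profile = "mount-start-1-994722" // profile name from the log

	// Start a no-Kubernetes node with a host mount, using the same
	// flags mount_start_test.go:98 passes above.
	start := exec.Command("out/minikube-linux-arm64", "start", "-p", profile,
		"--memory=2048", "--mount",
		"--mount-gid", "0", "--mount-msize", "6543",
		"--mount-port", "46464", "--mount-uid", "0",
		"--no-kubernetes", "--driver=docker", "--container-runtime=crio")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start: %v\n%s", err, out)
	}

	// Verify the mount is visible inside the node (mount_start_test.go:114).
	ls := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "--", "ls", "/minikube-host")
	if out, err := ls.CombinedOutput(); err != nil {
		log.Fatalf("mount not visible: %v\n%s", err, out)
	} else {
		log.Printf("host mount contents:\n%s", out)
	}
}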

                                                
                                    
TestMountStart/serial/StartWithMountSecond (10.38s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-996807 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-996807 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (9.381409276s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.38s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-996807 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-994722 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-994722 --alsologtostderr -v=5: (1.610577091s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-996807 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-996807
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-996807: (1.195122309s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.16s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-996807
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-996807: (7.161462956s)
--- PASS: TestMountStart/serial/RestartStopped (8.16s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-996807 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (81.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-729216 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-729216 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m20.926940822s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (81.42s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (7.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-729216 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-729216 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-729216 -- rollout status deployment/busybox: (5.446035291s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-729216 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-729216 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-729216 -- exec busybox-7dff88458-2tm4d -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-729216 -- exec busybox-7dff88458-qbs6z -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-729216 -- exec busybox-7dff88458-2tm4d -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-729216 -- exec busybox-7dff88458-qbs6z -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-729216 -- exec busybox-7dff88458-2tm4d -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-729216 -- exec busybox-7dff88458-qbs6z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.41s)
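Note: the sequence above is apply the manifest, wait for the rollout, read pod IPs and names via jsonpath, then run nslookup from every pod. The sketch below reproduces it with plain kubectl and --context rather than the `minikube kubectl -p` wrapper the harness uses; the manifest path and context name come from the log.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// kubectl runs a command against the multinode-729216 context and aborts on failure.
func kubectl(args ...string) string {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "multinode-729216"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	// Apply the test manifest and wait for both replicas to roll out.
	kubectl("apply", "-f", "./testdata/multinodes/multinode-pod-dns-test.yaml")
	kubectl("rollout", "status", "deployment/busybox")

	// Same jsonpath queries the test issues for pod IPs and names.
	fmt.Println("pod IPs:", kubectl("get", "pods", "-o", "jsonpath={.items[*].status.podIP}"))
	names := strings.Fields(kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}"))

	// In-cluster DNS check from every pod, mirroring multinode_test.go:536-554.
	for _, pod := range names {
		for _, host := range []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"} {
			kubectl("exec", pod, "--", "nslookup", host)
		}
	}
}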

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-729216 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-729216 -- exec busybox-7dff88458-2tm4d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-729216 -- exec busybox-7dff88458-2tm4d -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-729216 -- exec busybox-7dff88458-qbs6z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-729216 -- exec busybox-7dff88458-qbs6z -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)
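Note: the host IP seen from inside each pod is derived by the shell pipeline in multinode_test.go:572 (nslookup host.minikube.internal, take line 5, field 3) and then pinged once. A sketch of that extraction; the pod names are the ones from this particular run, and the pipeline assumes busybox nslookup's output layout.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// hostIPFromPod reproduces the pipeline from multinode_test.go:572:
//   nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
// i.e. it trusts busybox nslookup to print the answer's address as the
// third space-separated field of line 5.
func hostIPFromPod(pod string) (string, error) {
	cmd := exec.Command("kubectl", "--context", "multinode-729216",
		"exec", pod, "--", "sh", "-c",
		"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	out, err := cmd.Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	for _, pod := range []string{"busybox-7dff88458-2tm4d", "busybox-7dff88458-qbs6z"} {
		ip, err := hostIPFromPod(pod)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s sees host at %s\n", pod, ip)
		// One ICMP echo back to the host, as multinode_test.go:583 does.
		ping := exec.Command("kubectl", "--context", "multinode-729216",
			"exec", pod, "--", "sh", "-c", fmt.Sprintf("ping -c 1 %s", ip))
		if out, err := ping.CombinedOutput(); err != nil {
			log.Fatalf("ping from %s failed: %v\n%s", pod, err, out)
		}
	}
}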

                                                
                                    
TestMultiNode/serial/AddNode (29.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-729216 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-729216 -v 3 --alsologtostderr: (28.882548686s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.52s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-729216 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 cp testdata/cp-test.txt multinode-729216:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 cp multinode-729216:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1859154665/001/cp-test_multinode-729216.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 cp multinode-729216:/home/docker/cp-test.txt multinode-729216-m02:/home/docker/cp-test_multinode-729216_multinode-729216-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216-m02 "sudo cat /home/docker/cp-test_multinode-729216_multinode-729216-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 cp multinode-729216:/home/docker/cp-test.txt multinode-729216-m03:/home/docker/cp-test_multinode-729216_multinode-729216-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216-m03 "sudo cat /home/docker/cp-test_multinode-729216_multinode-729216-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 cp testdata/cp-test.txt multinode-729216-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 cp multinode-729216-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1859154665/001/cp-test_multinode-729216-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 cp multinode-729216-m02:/home/docker/cp-test.txt multinode-729216:/home/docker/cp-test_multinode-729216-m02_multinode-729216.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216 "sudo cat /home/docker/cp-test_multinode-729216-m02_multinode-729216.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 cp multinode-729216-m02:/home/docker/cp-test.txt multinode-729216-m03:/home/docker/cp-test_multinode-729216-m02_multinode-729216-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216-m03 "sudo cat /home/docker/cp-test_multinode-729216-m02_multinode-729216-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 cp testdata/cp-test.txt multinode-729216-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 cp multinode-729216-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1859154665/001/cp-test_multinode-729216-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 cp multinode-729216-m03:/home/docker/cp-test.txt multinode-729216:/home/docker/cp-test_multinode-729216-m03_multinode-729216.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216 "sudo cat /home/docker/cp-test_multinode-729216-m03_multinode-729216.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 cp multinode-729216-m03:/home/docker/cp-test.txt multinode-729216-m02:/home/docker/cp-test_multinode-729216-m03_multinode-729216-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 ssh -n multinode-729216-m02 "sudo cat /home/docker/cp-test_multinode-729216-m03_multinode-729216-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.47s)
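Note: every cp in the block above is followed by a `minikube ssh -n <node> "sudo cat ..."` readback. Here is one host-to-node and node-to-node round trip of that pattern as an illustrative sketch, using the profile, node, and path names from the log; the expected file contents are not shown in the log, so the sketch only checks the readback is non-empty.

package main

import (
	"log"
	"os/exec"
	"strings"
)

// minikube runs a subcommand against the multinode-729216 profile and aborts on failure.
func minikube(args ...string) string {
	out, err := exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "multinode-729216"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Host -> primary node, then node -> node, as helpers_test.go:556 does.
	minikube("cp", "testdata/cp-test.txt", "multinode-729216:/home/docker/cp-test.txt")
	minikube("cp", "multinode-729216:/home/docker/cp-test.txt",
		"multinode-729216-m02:/home/docker/cp-test_multinode-729216_multinode-729216-m02.txt")

	// Read the file back on the target node the same way helpers_test.go:534 does.
	got := minikube("ssh", "-n", "multinode-729216-m02",
		"sudo cat /home/docker/cp-test_multinode-729216_multinode-729216-m02.txt")
	if strings.TrimSpace(got) == "" {
		log.Fatal("copied file is empty on multinode-729216-m02")
	}
}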

                                                
                                    
TestMultiNode/serial/StopNode (2.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-729216 node stop m03: (1.210595247s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-729216 status: exit status 7 (478.023117ms)

                                                
                                                
-- stdout --
	multinode-729216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-729216-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-729216-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-729216 status --alsologtostderr: exit status 7 (483.602642ms)

                                                
                                                
-- stdout --
	multinode-729216
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-729216-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-729216-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 18:19:34.129600  399879 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:19:34.129729  399879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:19:34.129734  399879 out.go:358] Setting ErrFile to fd 2...
	I1105 18:19:34.129739  399879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:19:34.130148  399879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-279806/.minikube/bin
	I1105 18:19:34.130416  399879 out.go:352] Setting JSON to false
	I1105 18:19:34.130463  399879 mustload.go:65] Loading cluster: multinode-729216
	I1105 18:19:34.131373  399879 config.go:182] Loaded profile config "multinode-729216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:19:34.131401  399879 status.go:174] checking status of multinode-729216 ...
	I1105 18:19:34.132117  399879 notify.go:220] Checking for updates...
	I1105 18:19:34.132827  399879 cli_runner.go:164] Run: docker container inspect multinode-729216 --format={{.State.Status}}
	I1105 18:19:34.151572  399879 status.go:371] multinode-729216 host status = "Running" (err=<nil>)
	I1105 18:19:34.151597  399879 host.go:66] Checking if "multinode-729216" exists ...
	I1105 18:19:34.151899  399879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-729216
	I1105 18:19:34.168504  399879 host.go:66] Checking if "multinode-729216" exists ...
	I1105 18:19:34.168873  399879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 18:19:34.168937  399879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-729216
	I1105 18:19:34.197612  399879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33270 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/multinode-729216/id_rsa Username:docker}
	I1105 18:19:34.285815  399879 ssh_runner.go:195] Run: systemctl --version
	I1105 18:19:34.289959  399879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:19:34.301581  399879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 18:19:34.358331  399879 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-11-05 18:19:34.348883081 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1105 18:19:34.358938  399879 kubeconfig.go:125] found "multinode-729216" server: "https://192.168.67.2:8443"
	I1105 18:19:34.358977  399879 api_server.go:166] Checking apiserver status ...
	I1105 18:19:34.359026  399879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 18:19:34.369732  399879 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1392/cgroup
	I1105 18:19:34.379066  399879 api_server.go:182] apiserver freezer: "5:freezer:/docker/2c4118cb6ea5782f0825103b34e14c723084f6354df6e3c709e312350a0db758/crio/crio-bd4fd4b93f2bddda614117ba1ab9b4c8a6815db77fdc4fcdb7a1cd0305ffe10a"
	I1105 18:19:34.379155  399879 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2c4118cb6ea5782f0825103b34e14c723084f6354df6e3c709e312350a0db758/crio/crio-bd4fd4b93f2bddda614117ba1ab9b4c8a6815db77fdc4fcdb7a1cd0305ffe10a/freezer.state
	I1105 18:19:34.388005  399879 api_server.go:204] freezer state: "THAWED"
	I1105 18:19:34.388034  399879 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1105 18:19:34.395691  399879 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1105 18:19:34.395719  399879 status.go:463] multinode-729216 apiserver status = Running (err=<nil>)
	I1105 18:19:34.395730  399879 status.go:176] multinode-729216 status: &{Name:multinode-729216 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 18:19:34.395747  399879 status.go:174] checking status of multinode-729216-m02 ...
	I1105 18:19:34.396054  399879 cli_runner.go:164] Run: docker container inspect multinode-729216-m02 --format={{.State.Status}}
	I1105 18:19:34.412043  399879 status.go:371] multinode-729216-m02 host status = "Running" (err=<nil>)
	I1105 18:19:34.412110  399879 host.go:66] Checking if "multinode-729216-m02" exists ...
	I1105 18:19:34.412422  399879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-729216-m02
	I1105 18:19:34.428438  399879 host.go:66] Checking if "multinode-729216-m02" exists ...
	I1105 18:19:34.428862  399879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 18:19:34.428910  399879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-729216-m02
	I1105 18:19:34.445225  399879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33275 SSHKeyPath:/home/jenkins/minikube-integration/19910-279806/.minikube/machines/multinode-729216-m02/id_rsa Username:docker}
	I1105 18:19:34.529496  399879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 18:19:34.541208  399879 status.go:176] multinode-729216-m02 status: &{Name:multinode-729216-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1105 18:19:34.541251  399879 status.go:174] checking status of multinode-729216-m03 ...
	I1105 18:19:34.541565  399879 cli_runner.go:164] Run: docker container inspect multinode-729216-m03 --format={{.State.Status}}
	I1105 18:19:34.558125  399879 status.go:371] multinode-729216-m03 host status = "Stopped" (err=<nil>)
	I1105 18:19:34.558158  399879 status.go:384] host is not running, skipping remaining checks
	I1105 18:19:34.558165  399879 status.go:176] multinode-729216-m03 status: &{Name:multinode-729216-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)
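Note: the stderr trace shows how `status` decides the apiserver is healthy: find the kube-apiserver pid, confirm its cgroup freezer state is THAWED, then GET /healthz. The sketch below covers only the final probe against the endpoint from this trace; skipping TLS verification and relying on anonymous access to /healthz are assumptions made purely for illustration.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the status trace above ("Checking apiserver healthz at ...").
	const healthz = "https://192.168.67.2:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-internal certificate, so certificate
		// verification is skipped here for the sake of the example.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(healthz)
	if err != nil {
		log.Fatalf("healthz unreachable: %v", err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", healthz, resp.StatusCode, body)
}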

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-729216 node start m03 -v=7 --alsologtostderr: (9.059700776s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.81s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (94s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-729216
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-729216
E1105 18:19:53.688505  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-729216: (24.771740526s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-729216 --wait=true -v=8 --alsologtostderr
E1105 18:20:18.310855  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:21:16.752104  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-729216 --wait=true -v=8 --alsologtostderr: (1m9.095036723s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-729216
--- PASS: TestMultiNode/serial/RestartKeepsNodes (94.00s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-729216 node delete m03: (4.706992616s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.35s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-729216 stop: (23.688670724s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-729216 status: exit status 7 (100.890631ms)

                                                
                                                
-- stdout --
	multinode-729216
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-729216-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-729216 status --alsologtostderr: exit status 7 (91.085612ms)

                                                
                                                
-- stdout --
	multinode-729216
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-729216-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 18:21:47.566773  407596 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:21:47.566963  407596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:21:47.566992  407596 out.go:358] Setting ErrFile to fd 2...
	I1105 18:21:47.567014  407596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:21:47.567376  407596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-279806/.minikube/bin
	I1105 18:21:47.567630  407596 out.go:352] Setting JSON to false
	I1105 18:21:47.567694  407596 mustload.go:65] Loading cluster: multinode-729216
	I1105 18:21:47.568374  407596 config.go:182] Loaded profile config "multinode-729216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:21:47.568420  407596 status.go:174] checking status of multinode-729216 ...
	I1105 18:21:47.569250  407596 cli_runner.go:164] Run: docker container inspect multinode-729216 --format={{.State.Status}}
	I1105 18:21:47.570156  407596 notify.go:220] Checking for updates...
	I1105 18:21:47.587647  407596 status.go:371] multinode-729216 host status = "Stopped" (err=<nil>)
	I1105 18:21:47.587671  407596 status.go:384] host is not running, skipping remaining checks
	I1105 18:21:47.587678  407596 status.go:176] multinode-729216 status: &{Name:multinode-729216 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1105 18:21:47.587711  407596 status.go:174] checking status of multinode-729216-m02 ...
	I1105 18:21:47.588015  407596 cli_runner.go:164] Run: docker container inspect multinode-729216-m02 --format={{.State.Status}}
	I1105 18:21:47.606230  407596 status.go:371] multinode-729216-m02 host status = "Stopped" (err=<nil>)
	I1105 18:21:47.606254  407596 status.go:384] host is not running, skipping remaining checks
	I1105 18:21:47.606262  407596 status.go:176] multinode-729216-m02 status: &{Name:multinode-729216-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.88s)
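Note: with every node stopped, `minikube status` prints the per-node summary above and exits 7 rather than 0, so scripted callers have to treat that exit code as "stopped" rather than as a hard failure. A sketch of reading the exit code; only code 7 appears in this log, so anything else is treated as an error here.

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-729216", "status")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit code 7 is what the fully stopped cluster above reports.
		fmt.Println("cluster is stopped")
	default:
		log.Fatalf("status failed unexpectedly: %v", err)
	}
}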

                                                
                                    
TestMultiNode/serial/RestartMultiNode (55.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-729216 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-729216 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (55.179644504s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-729216 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.82s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (31.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-729216
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-729216-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-729216-m02 --driver=docker  --container-runtime=crio: exit status 14 (90.84933ms)

                                                
                                                
-- stdout --
	* [multinode-729216-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19910
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19910-279806/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-279806/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-729216-m02' is duplicated with machine name 'multinode-729216-m02' in profile 'multinode-729216'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-729216-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-729216-m03 --driver=docker  --container-runtime=crio: (28.813289464s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-729216
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-729216: exit status 80 (325.453426ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-729216 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-729216-m03 already exists in multinode-729216-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-729216-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-729216-m03: (1.910401331s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.19s)

                                                
                                    
TestPreload (129.65s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-785160 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1105 18:24:53.688417  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-785160 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m36.395697143s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-785160 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-785160 image pull gcr.io/k8s-minikube/busybox: (3.354887706s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-785160
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-785160: (5.791717516s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-785160 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1105 18:25:18.310423  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-785160 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (21.372032293s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-785160 image list
helpers_test.go:175: Cleaning up "test-preload-785160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-785160
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-785160: (2.403374114s)
--- PASS: TestPreload (129.65s)

                                                
                                    
x
+
TestScheduledStopUnix (105.6s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-668164 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-668164 --memory=2048 --driver=docker  --container-runtime=crio: (29.152008012s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-668164 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-668164 -n scheduled-stop-668164
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-668164 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1105 18:25:58.007569  285188 retry.go:31] will retry after 77.525µs: open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/scheduled-stop-668164/pid: no such file or directory
I1105 18:25:58.009557  285188 retry.go:31] will retry after 131.468µs: open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/scheduled-stop-668164/pid: no such file or directory
I1105 18:25:58.010698  285188 retry.go:31] will retry after 281.217µs: open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/scheduled-stop-668164/pid: no such file or directory
I1105 18:25:58.011848  285188 retry.go:31] will retry after 399.641µs: open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/scheduled-stop-668164/pid: no such file or directory
I1105 18:25:58.012986  285188 retry.go:31] will retry after 273.717µs: open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/scheduled-stop-668164/pid: no such file or directory
I1105 18:25:58.020736  285188 retry.go:31] will retry after 762.924µs: open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/scheduled-stop-668164/pid: no such file or directory
I1105 18:25:58.021866  285188 retry.go:31] will retry after 1.539868ms: open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/scheduled-stop-668164/pid: no such file or directory
I1105 18:25:58.024178  285188 retry.go:31] will retry after 1.753257ms: open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/scheduled-stop-668164/pid: no such file or directory
I1105 18:25:58.026388  285188 retry.go:31] will retry after 3.823626ms: open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/scheduled-stop-668164/pid: no such file or directory
I1105 18:25:58.030674  285188 retry.go:31] will retry after 4.775191ms: open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/scheduled-stop-668164/pid: no such file or directory
I1105 18:25:58.035901  285188 retry.go:31] will retry after 3.151301ms: open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/scheduled-stop-668164/pid: no such file or directory
I1105 18:25:58.040114  285188 retry.go:31] will retry after 10.4399ms: open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/scheduled-stop-668164/pid: no such file or directory
I1105 18:25:58.051453  285188 retry.go:31] will retry after 10.332348ms: open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/scheduled-stop-668164/pid: no such file or directory
I1105 18:25:58.062701  285188 retry.go:31] will retry after 25.485111ms: open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/scheduled-stop-668164/pid: no such file or directory
I1105 18:25:58.088931  285188 retry.go:31] will retry after 25.969731ms: open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/scheduled-stop-668164/pid: no such file or directory
I1105 18:25:58.115167  285188 retry.go:31] will retry after 34.709592ms: open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/scheduled-stop-668164/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-668164 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-668164 -n scheduled-stop-668164
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-668164
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-668164 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-668164
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-668164: exit status 7 (73.17982ms)

                                                
                                                
-- stdout --
	scheduled-stop-668164
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-668164 -n scheduled-stop-668164
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-668164 -n scheduled-stop-668164: exit status 7 (77.168539ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-668164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-668164
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-668164: (4.922702098s)
--- PASS: TestScheduledStopUnix (105.60s)
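The scheduled-stop flow exercised above can be reproduced by hand with the same flags that appear in this log; a minimal sketch (profile name and timings are illustrative):

	# schedule a stop five minutes out, then cancel it before it fires
	minikube stop -p scheduled-stop-668164 --schedule 5m
	minikube stop -p scheduled-stop-668164 --cancel-scheduled
	# schedule a short stop and wait for it to take effect
	minikube stop -p scheduled-stop-668164 --schedule 15s
	sleep 30
	minikube status -p scheduled-stop-668164 --format='{{.Host}}'   # prints "Stopped"; exit status 7 is expected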

                                                
                                    
x
+
TestInsufficientStorage (10.44s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-575468 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-575468 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.004360354s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"99f54a28-059a-4b46-bcd6-eb5859c53827","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-575468] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3ea83e5a-7b12-4561-a160-972a2d22aaae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19910"}}
	{"specversion":"1.0","id":"05af74b2-e69c-4fbe-a4b0-81144e796ec7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e34581f2-a0ca-4075-a6a6-e52b242b3f7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19910-279806/kubeconfig"}}
	{"specversion":"1.0","id":"a6d9c82f-b3f4-4c8a-b1db-41080ca3dfc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-279806/.minikube"}}
	{"specversion":"1.0","id":"12dc92bc-e416-4dc5-b374-bc03b7b670f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"ee43f4a2-91d8-443e-a443-69415beb47b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2a30f5c3-b442-4c00-8260-fb60ee984bd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e51fc32a-ad4f-48b1-9fef-6847e5a98f44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9607e130-b72e-48b6-aa1e-737535cb3e18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ca488e07-9574-4bfe-8b4a-c10a05cd3aac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"cf253e4f-1bc7-4a15-a050-5283cffad7d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-575468\" primary control-plane node in \"insufficient-storage-575468\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f8554a72-e9cb-4197-aa70-4a4ce877174b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1730282848-19883 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"dee8f629-d25b-4e26-99b2-c7b7b0a83396","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7772cb28-b6e7-4e16-9170-deb5b291e143","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-575468 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-575468 --output=json --layout=cluster: exit status 7 (272.474177ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-575468","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-575468","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 18:27:22.221630  425271 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-575468" does not appear in /home/jenkins/minikube-integration/19910-279806/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-575468 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-575468 --output=json --layout=cluster: exit status 7 (264.915349ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-575468","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-575468","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1105 18:27:22.488732  425332 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-575468" does not appear in /home/jenkins/minikube-integration/19910-279806/kubeconfig
	E1105 18:27:22.498271  425332 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/insufficient-storage-575468/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-575468" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-575468
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-575468: (1.89739984s)
--- PASS: TestInsufficientStorage (10.44s)
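The --output=json --layout=cluster status shown above is machine-readable; a minimal sketch of checking it for the insufficient-storage condition, assuming jq is available on the host:

	# StatusCode 507 / StatusName "InsufficientStorage" is reported at both the cluster and node level
	minikube status -p insufficient-storage-575468 --output=json --layout=cluster \
	  | jq -r '.StatusName, .Nodes[].StatusName'
	# the status command itself exits 7 here, but the JSON is still written to stdout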

                                                
                                    
x
+
TestRunningBinaryUpgrade (81.8s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3029514649 start -p running-upgrade-282221 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3029514649 start -p running-upgrade-282221 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.630099014s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-282221 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-282221 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.17034641s)
helpers_test.go:175: Cleaning up "running-upgrade-282221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-282221
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-282221: (2.820438122s)
--- PASS: TestRunningBinaryUpgrade (81.80s)

                                                
                                    
x
+
TestKubernetesUpgrade (398.28s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-997752 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-997752 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m14.645241587s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-997752
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-997752: (2.236665614s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-997752 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-997752 status --format={{.Host}}: exit status 7 (89.04883ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-997752 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1105 18:29:53.688263  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-997752 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m37.077490493s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-997752 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-997752 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-997752 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (131.127838ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-997752] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19910
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19910-279806/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-279806/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-997752
	    minikube start -p kubernetes-upgrade-997752 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9977522 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-997752 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-997752 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1105 18:34:53.688029  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-997752 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.430220751s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-997752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-997752
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-997752: (2.538054709s)
--- PASS: TestKubernetesUpgrade (398.28s)
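Reduced to its commands, the upgrade path exercised above is: start on the old Kubernetes version, stop, then restart on the new version; a downgrade attempt is rejected with exit code 106 and the recovery suggestions printed in the stderr block above. A minimal sketch using the versions from this run:

	minikube start -p kubernetes-upgrade-997752 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
	minikube stop  -p kubernetes-upgrade-997752
	minikube start -p kubernetes-upgrade-997752 --kubernetes-version=v1.31.2 --driver=docker --container-runtime=crio
	# re-running start with --kubernetes-version=v1.20.0 now fails with K8S_DOWNGRADE_UNSUPPORTED (exit 106)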

                                                
                                    
x
+
TestMissingContainerUpgrade (166.63s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3698680597 start -p missing-upgrade-622601 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3698680597 start -p missing-upgrade-622601 --memory=2200 --driver=docker  --container-runtime=crio: (1m27.649594872s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-622601
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-622601: (10.388506861s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-622601
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-622601 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-622601 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m5.317059364s)
helpers_test.go:175: Cleaning up "missing-upgrade-622601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-622601
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-622601: (2.308053204s)
--- PASS: TestMissingContainerUpgrade (166.63s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-003795 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-003795 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (100.151988ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-003795] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19910
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19910-279806/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-279806/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (36.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-003795 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-003795 --driver=docker  --container-runtime=crio: (36.011523538s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-003795 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.52s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (9.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-003795 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-003795 --no-kubernetes --driver=docker  --container-runtime=crio: (7.468486918s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-003795 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-003795 status -o json: exit status 2 (274.422321ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-003795","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-003795
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-003795: (1.922536197s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.67s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (8.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-003795 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-003795 --no-kubernetes --driver=docker  --container-runtime=crio: (8.019527367s)
--- PASS: TestNoKubernetes/serial/Start (8.02s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-003795 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-003795 "sudo systemctl is-active --quiet service kubelet": exit status 1 (429.602387ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.43s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-003795
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-003795: (1.251105676s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-003795 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-003795 --driver=docker  --container-runtime=crio: (8.201273225s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-003795 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-003795 "sudo systemctl is-active --quiet service kubelet": exit status 1 (320.764311ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.13s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.13s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (83.56s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2777281708 start -p stopped-upgrade-257463 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1105 18:30:18.310917  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2777281708 start -p stopped-upgrade-257463 --memory=2200 --vm-driver=docker  --container-runtime=crio: (50.709083603s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2777281708 -p stopped-upgrade-257463 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2777281708 -p stopped-upgrade-257463 stop: (2.60271974s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-257463 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-257463 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.250555566s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (83.56s)
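The stopped-binary upgrade above follows the same pattern with a released binary: the v1.26.0 binary creates and stops the cluster, then the binary under test restarts it in place; a minimal sketch using the paths from this run:

	/tmp/minikube-v1.26.0.2777281708 start -p stopped-upgrade-257463 --memory=2200 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.26.0.2777281708 -p stopped-upgrade-257463 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-257463 --memory=2200 --driver=docker --container-runtime=crio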

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-257463
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                    
x
+
TestPause/serial/Start (53.48s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-439057 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1105 18:33:21.385785  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-439057 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (53.480497064s)
--- PASS: TestPause/serial/Start (53.48s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (20.58s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-439057 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-439057 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.558408498s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (20.58s)

                                                
                                    
x
+
TestPause/serial/Pause (1.16s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-439057 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-439057 --alsologtostderr -v=5: (1.163188209s)
--- PASS: TestPause/serial/Pause (1.16s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.49s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-439057 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-439057 --output=json --layout=cluster: exit status 2 (491.955056ms)

                                                
                                                
-- stdout --
	{"Name":"pause-439057","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-439057","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.49s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.96s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-439057 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.96s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.36s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-439057 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-439057 --alsologtostderr -v=5: (1.361613321s)
--- PASS: TestPause/serial/PauseAgain (1.36s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.36s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-439057 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-439057 --alsologtostderr -v=5: (3.363426596s)
--- PASS: TestPause/serial/DeletePaused (3.36s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (1.02s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-439057
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-439057: exit status 1 (21.026208ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-439057: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.02s)
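The deleted-resources check above amounts to confirming that no Docker artifacts survive minikube delete; a minimal sketch of the same verification, using the commands from this log:

	minikube delete -p pause-439057 --alsologtostderr -v=5
	docker volume inspect pause-439057   # expected to fail: "no such volume"
	docker ps -a                         # the pause-439057 container should be gone
	docker network ls                    # the pause-439057 network should be gone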

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-691540 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-691540 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (169.198474ms)

                                                
                                                
-- stdout --
	* [false-691540] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19910
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19910-279806/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-279806/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1105 18:35:06.645089  465037 out.go:345] Setting OutFile to fd 1 ...
	I1105 18:35:06.645230  465037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:35:06.645243  465037 out.go:358] Setting ErrFile to fd 2...
	I1105 18:35:06.645248  465037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 18:35:06.645532  465037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-279806/.minikube/bin
	I1105 18:35:06.645968  465037 out.go:352] Setting JSON to false
	I1105 18:35:06.647015  465037 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8250,"bootTime":1730823457,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1105 18:35:06.647088  465037 start.go:139] virtualization:  
	I1105 18:35:06.649841  465037 out.go:177] * [false-691540] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1105 18:35:06.651471  465037 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 18:35:06.651535  465037 notify.go:220] Checking for updates...
	I1105 18:35:06.654200  465037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 18:35:06.655559  465037 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-279806/kubeconfig
	I1105 18:35:06.656935  465037 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-279806/.minikube
	I1105 18:35:06.658535  465037 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1105 18:35:06.660063  465037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 18:35:06.662063  465037 config.go:182] Loaded profile config "kubernetes-upgrade-997752": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 18:35:06.662212  465037 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 18:35:06.684856  465037 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1105 18:35:06.684984  465037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 18:35:06.737810  465037 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2024-11-05 18:35:06.726744212 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1105 18:35:06.737921  465037 docker.go:318] overlay module found
	I1105 18:35:06.739621  465037 out.go:177] * Using the docker driver based on user configuration
	I1105 18:35:06.741612  465037 start.go:297] selected driver: docker
	I1105 18:35:06.741648  465037 start.go:901] validating driver "docker" against <nil>
	I1105 18:35:06.741678  465037 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 18:35:06.744360  465037 out.go:201] 
	W1105 18:35:06.746234  465037 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1105 18:35:06.748234  465037 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-691540 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-691540

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-691540

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-691540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-691540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-691540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-691540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-691540

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-691540

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-691540

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-691540

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-691540

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-691540" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-691540" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 05 Nov 2024 18:34:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-997752
contexts:
- context:
    cluster: kubernetes-upgrade-997752
    extensions:
    - extension:
        last-update: Tue, 05 Nov 2024 18:34:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-997752
  name: kubernetes-upgrade-997752
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-997752
  user:
    client-certificate: /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/kubernetes-upgrade-997752/client.crt
    client-key: /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/kubernetes-upgrade-997752/client.key
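Note: the repeated "context does not exist" errors in this debug log follow directly from the kubeconfig above, which has no context named false-691540 and an empty current-context. For readers reproducing this outside the harness, here is a minimal, hypothetical Go sketch (not part of the test suite) that loads a kubeconfig with gopkg.in/yaml.v3 and reports whether a given context exists; the file path and context name are illustrative assumptions.

// kubeconfig_check.go: hypothetical sketch, not harness code.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type kubeconfig struct {
	CurrentContext string `yaml:"current-context"`
	Contexts       []struct {
		Name string `yaml:"name"`
	} `yaml:"contexts"`
}

func main() {
	// Path is an example; the report above uses a Jenkins-specific location.
	data, err := os.ReadFile(os.ExpandEnv("$HOME/.kube/config"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cfg kubeconfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	want := "false-691540" // the context the debug commands above asked for
	found := false
	for _, c := range cfg.Contexts {
		if c.Name == want {
			found = true
			break
		}
	}
	fmt.Printf("current-context=%q, context %q present: %v\n", cfg.CurrentContext, want, found)
}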

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-691540

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-691540"

                                                
                                                
----------------------- debugLogs end: false-691540 [took: 4.246185276s] --------------------------------
helpers_test.go:175: Cleaning up "false-691540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-691540
--- PASS: TestNetworkPlugins/group/false (4.66s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (153.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-187534 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1105 18:37:56.754204  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-187534 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m33.551434366s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (153.55s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-187534 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3adb27db-14a4-4561-9f67-805ef4e8ff37] Pending
helpers_test.go:344: "busybox" [3adb27db-14a4-4561-9f67-805ef4e8ff37] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3adb27db-14a4-4561-9f67-805ef4e8ff37] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.005556031s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-187534 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.93s)
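The DeployApp step above creates the busybox pod and then waits up to 8m0s for pods matching "integration-test=busybox" to become healthy. A minimal sketch of that kind of wait, written here as standalone Go driving kubectl (this is not the suite's own helper; the context name, selector, and timeout are copied from the log as examples):

// wait_for_pods.go: hypothetical sketch of a label-selector wait loop.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForRunning(context, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Ask kubectl for the phase of every pod matching the selector.
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %q pods in context %q", selector, context)
}

func main() {
	err := waitForRunning("old-k8s-version-187534", "integration-test=busybox", 8*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pod(s) running")
}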

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-187534 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-187534 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.980651229s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-187534 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.13s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-187534 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-187534 --alsologtostderr -v=3: (12.353508432s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.35s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (68.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-576633 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-576633 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (1m8.974114352s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.97s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-187534 -n old-k8s-version-187534
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-187534 -n old-k8s-version-187534: exit status 7 (79.244364ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-187534 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
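The "status error: exit status 7 (may be ok)" lines above reflect that minikube status exits non-zero for a stopped host while still printing the state on stdout. A hypothetical Go sketch of reading that exit code so a stopped profile is not treated as a hard failure (binary path and profile name are taken from the log; this is not the helper the suite actually uses):

// status_exit_code.go: hypothetical sketch, not harness code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format", "{{.Host}}", "-p", "old-k8s-version-187534")
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("host=%q exit=%d\n", string(out), code)
}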

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (35.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-187534 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1105 18:39:53.688696  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-187534 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (34.486965187s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-187534 -n old-k8s-version-187534
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (35.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1105 18:40:18.310101  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-cd95d586-hwnln" [6a7551f1-a942-4565-ac0b-dc45a8ef4b76] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-cd95d586-hwnln" [6a7551f1-a942-4565-ac0b-dc45a8ef4b76] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 26.003564768s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (26.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-hwnln" [6a7551f1-a942-4565-ac0b-dc45a8ef4b76] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004429416s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-187534 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-576633 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [400c851c-15c2-4977-82d5-bb1d235f5a20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [400c851c-15c2-4977-82d5-bb1d235f5a20] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00423927s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-576633 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.39s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-187534 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-187534 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-187534 -n old-k8s-version-187534
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-187534 -n old-k8s-version-187534: exit status 2 (370.895052ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-187534 -n old-k8s-version-187534
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-187534 -n old-k8s-version-187534: exit status 2 (354.969784ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-187534 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-187534 -n old-k8s-version-187534
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-187534 -n old-k8s-version-187534
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.00s)
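The Pause step above runs pause, confirms via status that the apiserver and kubelet report Paused/Stopped (non-zero exits, flagged "may be ok"), and then unpauses. A minimal, hypothetical sketch of that sequence driven through the minikube binary (not the test's own code; the profile name is the one from this log):

// pause_cycle.go: hypothetical sketch of the pause/verify/unpause cycle.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "old-k8s-version-187534"
	if _, err := run("pause", "-p", profile); err != nil {
		fmt.Println("pause failed:", err)
		return
	}
	// While paused, status is expected to exit non-zero (exit status 2 in the
	// log above), so the error here is informational rather than fatal.
	out, err := run("status", "--format", "{{.APIServer}}", "-p", profile)
	fmt.Printf("apiserver status: %s (err: %v)\n", out, err)
	if _, err := run("unpause", "-p", profile); err != nil {
		fmt.Println("unpause failed:", err)
	}
}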

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (53.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-110092 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-110092 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (53.285201122s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (53.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-576633 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-576633 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.190234676s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-576633 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.33s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-576633 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-576633 --alsologtostderr -v=3: (12.130825694s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-576633 -n no-preload-576633
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-576633 -n no-preload-576633: exit status 7 (91.630783ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-576633 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (279.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-576633 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-576633 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m39.333914359s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-576633 -n no-preload-576633
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (279.66s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-110092 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d6a971ae-3fd9-4d3e-a3ae-50f405a53585] Pending
helpers_test.go:344: "busybox" [d6a971ae-3fd9-4d3e-a3ae-50f405a53585] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d6a971ae-3fd9-4d3e-a3ae-50f405a53585] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003330364s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-110092 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-110092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-110092 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-110092 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-110092 --alsologtostderr -v=3: (12.009479302s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-110092 -n embed-certs-110092
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-110092 -n embed-certs-110092: exit status 7 (82.745301ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-110092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (302.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-110092 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1105 18:44:05.677759  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:44:05.684132  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:44:05.695627  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:44:05.717049  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:44:05.758392  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:44:05.839946  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:44:06.001477  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:44:06.322937  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:44:06.965110  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:44:08.246525  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:44:10.808418  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:44:15.930036  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:44:26.172114  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:44:46.653527  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:44:53.688645  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:45:18.310391  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:45:27.615252  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-110092 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (5m2.191168263s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-110092 -n embed-certs-110092
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (302.51s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zffmj" [b3fdce78-c79f-4de2-a418-bca56822949a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003223991s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zffmj" [b3fdce78-c79f-4de2-a418-bca56822949a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003760586s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-576633 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-576633 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-576633 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-576633 -n no-preload-576633
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-576633 -n no-preload-576633: exit status 2 (331.523754ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-576633 -n no-preload-576633
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-576633 -n no-preload-576633: exit status 2 (314.261448ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-576633 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-576633 -n no-preload-576633
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-576633 -n no-preload-576633
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-253764 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-253764 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (50.175058547s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-253764 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bb28c0ea-fe6c-40e2-a626-d6be856da840] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1105 18:46:49.536751  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [bb28c0ea-fe6c-40e2-a626-d6be856da840] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004766792s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-253764 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-253764 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-253764 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-253764 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-253764 --alsologtostderr -v=3: (11.969667714s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6lw9s" [c60ef711-272b-4cfd-8df0-b12ec937e3a9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00376039s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6lw9s" [c60ef711-272b-4cfd-8df0-b12ec937e3a9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005090664s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-110092 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-253764 -n default-k8s-diff-port-253764
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-253764 -n default-k8s-diff-port-253764: exit status 7 (75.602084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-253764 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-253764 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-253764 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m57.963877368s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-253764 -n default-k8s-diff-port-253764
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.39s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-110092 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-110092 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-110092 --alsologtostderr -v=1: (1.012849015s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-110092 -n embed-certs-110092
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-110092 -n embed-certs-110092: exit status 2 (365.847741ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-110092 -n embed-certs-110092
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-110092 -n embed-certs-110092: exit status 2 (418.436802ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-110092 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-110092 -n embed-certs-110092
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-110092 -n embed-certs-110092
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.83s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (40.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-779653 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-779653 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (40.406363469s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.41s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-779653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-779653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.628502915s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.63s)
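
The addon-enable step above overrides the metrics-server image and registry. A condensed sketch of that invocation, with PROFILE as a placeholder for the profile under test; the fake.domain registry presumably keeps the suite from depending on a real metrics-server pull.

    PROFILE=newest-cni-779653   # profile from this run
    # Enable the metrics-server addon with substitute image and registry,
    # exactly as the test command above does.
    out/minikube-linux-arm64 addons enable metrics-server -p "$PROFILE" \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain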

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-779653 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-779653 --alsologtostderr -v=3: (1.998398265s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-779653 -n newest-cni-779653
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-779653 -n newest-cni-779653: exit status 7 (79.606393ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-779653 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)
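
The EnableAddonAfterStop step leans on minikube status exit codes: 7 here means the host is Stopped, which the test accepts before toggling the addon. A rough manual equivalent, using only commands that appear in this log (run without set -e, since the status command intentionally exits non-zero):

    PROFILE=newest-cni-779653
    # Exit code 7 (host Stopped) is expected right after `minikube stop`.
    out/minikube-linux-arm64 status --format='{{.Host}}' -p "$PROFILE" -n "$PROFILE"
    echo "status exit code: $?"

    # Addons can still be toggled while the cluster is stopped; the image
    # override mirrors the test invocation above.
    out/minikube-linux-arm64 addons enable dashboard -p "$PROFILE" \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4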

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (16.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-779653 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-779653 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (16.006101913s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-779653 -n newest-cni-779653
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.37s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-779653 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
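
VerifyKubernetesImages lists the images loaded in the profile and reports any that are not stock minikube/Kubernetes images (the kindnetd images above are reported but tolerated). The listing itself can be reproduced with the command below; the trailing grep is a crude illustrative filter that assumes the image references appear as plain strings in the JSON, not what the test actually runs.

    PROFILE=newest-cni-779653
    # Dump all images known to the container runtime in this profile as JSON;
    # the test parses this output to spot images outside the expected registries.
    out/minikube-linux-arm64 -p "$PROFILE" image list --format=json
    # Rough check for the non-minikube images seen in this run:
    out/minikube-linux-arm64 -p "$PROFILE" image list --format=json | grep -o '"[^"]*kindest/kindnetd[^"]*"'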

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-779653 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-779653 -n newest-cni-779653
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-779653 -n newest-cni-779653: exit status 2 (343.460619ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-779653 -n newest-cni-779653
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-779653 -n newest-cni-779653: exit status 2 (318.076159ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-779653 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-779653 -n newest-cni-779653
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-779653 -n newest-cni-779653
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (53.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-691540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1105 18:49:05.677211  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-691540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (53.561645668s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-691540 "pgrep -a kubelet"
I1105 18:49:22.039746  285188 config.go:182] Loaded profile config "auto-691540": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-691540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-w6cct" [c67260ff-2bd1-4288-9b09-ca8de0733d5d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-w6cct" [c67260ff-2bd1-4288-9b09-ca8de0733d5d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.0053237s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)
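
Each NetCatPod step deploys the same testdata/netcat-deployment.yaml and waits for the app=netcat pod to become Ready. A manual approximation follows; the test uses its own Go polling helper, so the kubectl wait line here is a stand-in, and CONTEXT is just the kube context from this run.

    CONTEXT=auto-691540
    kubectl --context "$CONTEXT" replace --force -f testdata/netcat-deployment.yaml
    # The test polls for up to 15m; kubectl wait gives a similar check by hand.
    kubectl --context "$CONTEXT" wait --for=condition=ready pod -l app=netcat \
      --namespace default --timeout=15m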

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-691540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-691540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-691540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
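
The DNS, Localhost and HairPin steps all exec into the netcat deployment created above: nslookup checks in-cluster DNS, nc against localhost checks that the pod can reach its own port, and nc against the netcat Service name checks hairpin traffic (the pod reaching itself back through its Service). The exact commands from this log, collected in one place:

    CONTEXT=auto-691540
    # In-cluster DNS resolution
    kubectl --context "$CONTEXT" exec deployment/netcat -- nslookup kubernetes.default
    # Port 8080 reachable from inside the pod itself
    kubectl --context "$CONTEXT" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Hairpin: the pod reaches its own Service ("netcat") and is routed back to itself
    kubectl --context "$CONTEXT" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"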

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (54.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-691540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1105 18:49:53.688551  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:50:01.388064  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:50:18.310871  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/addons-638421/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:50:35.310101  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/no-preload-576633/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:50:35.317575  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/no-preload-576633/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:50:35.329067  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/no-preload-576633/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:50:35.350421  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/no-preload-576633/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:50:35.391955  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/no-preload-576633/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:50:35.473878  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/no-preload-576633/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:50:35.635418  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/no-preload-576633/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:50:35.957305  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/no-preload-576633/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:50:36.598983  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/no-preload-576633/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:50:37.880328  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/no-preload-576633/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:50:40.442383  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/no-preload-576633/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:50:45.563891  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/no-preload-576633/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-691540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (54.74819113s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.75s)
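
The Start steps in this group differ only in the CNI selection flag passed to minikube start. Collected from the commands in this log (profile names are from this run; the COMMON variable is just a condensation of the shared flags and is not how the tests invoke minikube):

    MK=out/minikube-linux-arm64
    COMMON="--memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker --container-runtime=crio"

    $MK start -p auto-691540               $COMMON                                  # no --cni: auto-selected CNI
    $MK start -p flannel-691540            $COMMON --cni=flannel
    $MK start -p calico-691540             $COMMON --cni=calico
    $MK start -p kindnet-691540            $COMMON --cni=kindnet
    $MK start -p bridge-691540             $COMMON --cni=bridge
    $MK start -p custom-flannel-691540     $COMMON --cni=testdata/kube-flannel.yaml  # custom CNI manifest
    $MK start -p enable-default-cni-691540 $COMMON --enable-default-cni=true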

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9kw7v" [97426a35-3585-431d-81f4-3a8b3953c456] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004425379s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
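
ControllerPod waits for the CNI's own controller pod to be Running, here app=flannel in the kube-flannel namespace; the calico and kindnet groups later in this report wait on k8s-app=calico-node and app=kindnet in kube-system. A manual equivalent of that wait, substituting kubectl wait for the test's polling helper:

    CONTEXT=flannel-691540
    kubectl --context "$CONTEXT" -n kube-flannel wait --for=condition=ready pod \
      -l app=flannel --timeout=10m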

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-691540 "pgrep -a kubelet"
I1105 18:50:53.997762  285188 config.go:182] Loaded profile config "flannel-691540": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-691540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-srgcp" [f9d88e2e-f1f6-4d1a-a3a8-c42d83fad406] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1105 18:50:55.805580  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/no-preload-576633/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-srgcp" [f9d88e2e-f1f6-4d1a-a3a8-c42d83fad406] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003655288s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-691540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-691540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-691540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (62.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-691540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1105 18:51:57.249095  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/no-preload-576633/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-691540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m2.253111044s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hx22t" [7b77d969-6bab-4d85-aca1-3e67bfd9f2ce] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004346424s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hx22t" [7b77d969-6bab-4d85-aca1-3e67bfd9f2ce] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004075553s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-253764 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)
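
UserAppExistsAfterStop and AddonExistsAfterStop both confirm that the kubernetes-dashboard pod comes back after the stop/start cycle; the second step also dumps the metrics-scraper deployment. By hand this amounts to roughly the following (the pod check is an approximation of the test's polling helper; the describe command is verbatim from the log):

    CONTEXT=default-k8s-diff-port-253764
    # Dashboard pod should be Running again after the restart
    kubectl --context "$CONTEXT" -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    # The test also records the scraper deployment for the report
    kubectl --context "$CONTEXT" describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard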

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-253764 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-253764 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-253764 -n default-k8s-diff-port-253764
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-253764 -n default-k8s-diff-port-253764: exit status 2 (349.558485ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-253764 -n default-k8s-diff-port-253764
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-253764 -n default-k8s-diff-port-253764: exit status 2 (373.754757ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-253764 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-253764 -n default-k8s-diff-port-253764
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-253764 -n default-k8s-diff-port-253764
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.71s)
E1105 18:55:35.311091  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/no-preload-576633/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:55:44.235539  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/auto-691540/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:55:47.714382  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/flannel-691540/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:55:47.720779  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/flannel-691540/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:55:47.732229  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/flannel-691540/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:55:47.753676  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/flannel-691540/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:55:47.795166  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/flannel-691540/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:55:47.877414  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/flannel-691540/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:55:48.038922  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/flannel-691540/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:55:48.360568  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/flannel-691540/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:55:49.002647  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/flannel-691540/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:55:50.284269  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/flannel-691540/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:55:52.846238  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/flannel-691540/client.crt: no such file or directory" logger="UnhandledError"
E1105 18:55:57.968122  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/flannel-691540/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (59.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-691540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-691540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (59.57922835s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-nfjxm" [febc5884-903a-422c-ae94-f9b5d77238cc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005023376s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-691540 "pgrep -a kubelet"
I1105 18:52:34.941446  285188 config.go:182] Loaded profile config "calico-691540": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-691540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hh9rd" [97702e0e-7cd3-4592-8343-13bddf80cbc5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hh9rd" [97702e0e-7cd3-4592-8343-13bddf80cbc5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004423231s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-691540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-691540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-691540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (50.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-691540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1105 18:53:19.170944  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/no-preload-576633/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-691540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (50.402002106s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (50.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-691540 "pgrep -a kubelet"
I1105 18:53:27.548911  285188 config.go:182] Loaded profile config "custom-flannel-691540": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-691540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4skdh" [c836f0db-4a5a-4c93-9784-09875d10bf2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4skdh" [c836f0db-4a5a-4c93-9784-09875d10bf2b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.003764537s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-691540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-691540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-691540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-77fcf" [ef45f2d7-199a-4b4c-8e09-c0084926f86a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.008127161s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (48.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-691540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1105 18:54:05.677672  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/old-k8s-version-187534/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-691540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (48.829385341s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-691540 "pgrep -a kubelet"
I1105 18:54:08.338161  285188 config.go:182] Loaded profile config "kindnet-691540": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-691540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-968b9" [e74b6aab-2b00-431b-9ab2-fcfedfe9e6d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-968b9" [e74b6aab-2b00-431b-9ab2-fcfedfe9e6d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003389731s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-691540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-691540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-691540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (74.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-691540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-691540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m14.691772559s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (74.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-691540 "pgrep -a kubelet"
I1105 18:54:52.078845  285188 config.go:182] Loaded profile config "bridge-691540": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-691540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-79gdz" [eba4d00e-5c74-4487-9e70-9749dccc8f45] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1105 18:54:53.688540  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/functional-762187/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-79gdz" [eba4d00e-5c74-4487-9e70-9749dccc8f45] Running
E1105 18:55:03.273440  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/auto-691540/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005554014s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-691540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-691540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-691540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-691540 "pgrep -a kubelet"
I1105 18:56:00.246683  285188 config.go:182] Loaded profile config "enable-default-cni-691540": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-691540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c7lv2" [615cbf46-1347-4056-8c14-a68329da2528] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1105 18:56:03.012213  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/no-preload-576633/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-c7lv2" [615cbf46-1347-4056-8c14-a68329da2528] Running
E1105 18:56:08.209551  285188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/flannel-691540/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004006649s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-691540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-691540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-691540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    

Test skip (31/330)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.52s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-346323 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-346323" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-346323
--- SKIP: TestDownloadOnlyKic (0.52s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.32s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-638421 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
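
The three DNS-forwarding skips above come from one platform check. A hedged Go sketch of that check, where the driver comparison is an assumption and only the Darwin requirement is stated in the message:

package integration

import (
	"runtime"
	"testing"
)

// skipUnlessHyperkitOnDarwin reflects the tunnel DNS skips above: the feature is
// exercised only with the Hyperkit driver on macOS. Sketch only, not minikube's code.
func skipUnlessHyperkitOnDarwin(t *testing.T, driver string) {
	t.Helper()
	if runtime.GOOS != "darwin" || driver != "hyperkit" {
		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping the DNS forwarding test")
	}
}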

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-946009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-946009
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
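
The cleanup lines above show the pattern used after every skipped profile: delete the profile with the minikube binary under test. A small Go sketch of that pattern, with the helper name assumed:

package integration

import (
	"os/exec"
	"testing"
)

// cleanupProfile runs "out/minikube-linux-arm64 delete -p <profile>", matching the
// helpers_test.go lines above. Sketch only; error handling is simplified.
func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).CombinedOutput()
	if err != nil {
		t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
	}
}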

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-691540 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-691540

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-691540

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-691540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-691540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-691540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-691540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-691540

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-691540

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-691540

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-691540

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-691540

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-691540" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-691540" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19910-279806/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 05 Nov 2024 18:34:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-997752
contexts:
- context:
    cluster: kubernetes-upgrade-997752
    extensions:
    - extension:
        last-update: Tue, 05 Nov 2024 18:34:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-997752
  name: kubernetes-upgrade-997752
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-997752
  user:
    client-certificate: /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/kubernetes-upgrade-997752/client.crt
    client-key: /home/jenkins/minikube-integration/19910-279806/.minikube/profiles/kubernetes-upgrade-997752/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-691540

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-691540"

                                                
                                                
----------------------- debugLogs end: kubenet-691540 [took: 3.70104747s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-691540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-691540
--- SKIP: TestNetworkPlugins/group/kubenet (3.86s)
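
The kubenet skip above reflects a runtime/CNI constraint: kubenet provides no CNI plugin, and the crio runtime needs one. A hedged Go sketch of that gate (the exact condition is an assumption):

package integration

import "testing"

// skipKubenetWithoutCNI mirrors the skip above: kubenet is only exercised with the
// docker runtime, since crio requires a CNI plugin. Sketch only, not minikube's code.
func skipKubenetWithoutCNI(t *testing.T, containerRuntime string) {
	t.Helper()
	if containerRuntime != "docker" {
		t.Skipf("Skipping the test as the %s container runtime requires CNI", containerRuntime)
	}
}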

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-691540 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-691540

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-691540

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-691540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-691540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-691540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-691540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-691540

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-691540

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-691540

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-691540

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-691540

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-691540" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-691540

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-691540

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-691540

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-691540

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-691540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-691540" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-691540

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-691540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-691540"

                                                
                                                
----------------------- debugLogs end: cilium-691540 [took: 5.890998201s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-691540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-691540
--- SKIP: TestNetworkPlugins/group/cilium (6.15s)
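
Most of the debug output above reduces to one condition: the profile's context is absent from the kubeconfig, so every kubectl call fails with "context was not found". A short client-go sketch of that check; the context name is taken from the section above, while the kubeconfig path is a placeholder, not a value from this report:

package main

import (
	"fmt"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// Load a kubeconfig and report whether a given context exists, which is the check
// behind the repeated "context was not found" lines above. Illustrative only.
func main() {
	path := filepath.Join(homedir.HomeDir(), ".kube", "config") // placeholder path
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println("could not load kubeconfig:", err)
		return
	}
	name := "cilium-691540" // context name from the debugLogs above
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("context %q was not found in %s\n", name, path)
	}
}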

                                                
                                    