Test Report: Docker_Linux_crio_arm64 18424

1ff1985e433cf64121c1d5b23135320107f58df6:2024-10-07:36542

Failed tests (4/328)

| Order | Failed test                                 | Duration (s) |
|-------|---------------------------------------------|--------------|
| 32    | TestAddons/serial/GCPAuth/PullSecret        | 480.86       |
| 35    | TestAddons/parallel/Ingress                 | 153.13       |
| 37    | TestAddons/parallel/MetricsServer           | 340.6        |
| 174   | TestMultiControlPlane/serial/RestartCluster | 128.66       |
TestAddons/serial/GCPAuth/PullSecret (480.86s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-779469 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-779469 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e7bd7f44-62b8-425f-9bfb-1f321b892bd7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/serial/GCPAuth/PullSecret: WARNING: pod list for "default" "integration-test=busybox" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:627: ***** TestAddons/serial/GCPAuth/PullSecret: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:627: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-779469 -n addons-779469
addons_test.go:627: TestAddons/serial/GCPAuth/PullSecret: showing logs for failed pods as of 2024-10-07 13:15:14.18187703 +0000 UTC m=+689.773231403
addons_test.go:627: (dbg) Run:  kubectl --context addons-779469 describe po busybox -n default
addons_test.go:627: (dbg) kubectl --context addons-779469 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-779469/192.168.49.2
Start Time:       Mon, 07 Oct 2024 13:07:13 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.21
IPs:
IP:  10.244.0.21
Containers:
busybox:
Container ID:  
Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sleep
3600
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n948t (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-n948t:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  8m1s                 default-scheduler  Successfully assigned default/busybox to addons-779469
Normal   Pulling    6m40s (x4 over 8m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning  Failed     6m40s (x4 over 8m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
Warning  Failed     6m40s (x4 over 8m)   kubelet            Error: ErrImagePull
Warning  Failed     6m14s (x6 over 8m)   kubelet            Error: ImagePullBackOff
Normal   BackOff    2m54s (x20 over 8m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
addons_test.go:627: (dbg) Run:  kubectl --context addons-779469 logs busybox -n default
addons_test.go:627: (dbg) Non-zero exit: kubectl --context addons-779469 logs busybox -n default: exit status 1 (108.061357ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:627: kubectl --context addons-779469 logs busybox -n default: exit status 1
addons_test.go:629: wait: integration-test=busybox within 8m0s: context deadline exceeded
--- FAIL: TestAddons/serial/GCPAuth/PullSecret (480.86s)

TestAddons/parallel/Ingress (153.13s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-779469 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-779469 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-779469 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3598c900-244e-4281-8c80-2ad97162c82a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3598c900-244e-4281-8c80-2ad97162c82a] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003362379s
I1007 13:17:26.488419 1694126 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-779469 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.709710256s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-779469 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-779469
helpers_test.go:235: (dbg) docker inspect addons-779469:

-- stdout --
	[
	    {
	        "Id": "af34deb52be076fb7ac68abb938793931472379dd5995e00397ab399714f2ba6",
	        "Created": "2024-10-07T13:04:23.307101975Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1695380,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-07T13:04:23.458809165Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/af34deb52be076fb7ac68abb938793931472379dd5995e00397ab399714f2ba6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/af34deb52be076fb7ac68abb938793931472379dd5995e00397ab399714f2ba6/hostname",
	        "HostsPath": "/var/lib/docker/containers/af34deb52be076fb7ac68abb938793931472379dd5995e00397ab399714f2ba6/hosts",
	        "LogPath": "/var/lib/docker/containers/af34deb52be076fb7ac68abb938793931472379dd5995e00397ab399714f2ba6/af34deb52be076fb7ac68abb938793931472379dd5995e00397ab399714f2ba6-json.log",
	        "Name": "/addons-779469",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-779469:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-779469",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c32aa627b8409b0875544f84c1089059aa0cd4f7097ccf2d6b61621994b0f35b-init/diff:/var/lib/docker/overlay2/ba883e93760810ee908affcdb026e83ee6095990c52f4c87c201773cc7ffeb3e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c32aa627b8409b0875544f84c1089059aa0cd4f7097ccf2d6b61621994b0f35b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c32aa627b8409b0875544f84c1089059aa0cd4f7097ccf2d6b61621994b0f35b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c32aa627b8409b0875544f84c1089059aa0cd4f7097ccf2d6b61621994b0f35b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-779469",
	                "Source": "/var/lib/docker/volumes/addons-779469/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-779469",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-779469",
	                "name.minikube.sigs.k8s.io": "addons-779469",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c79b5bc4d521014ff2c5e3df210959d3649b4aabd99b5264e82c5bf5ec5e97e7",
	            "SandboxKey": "/var/run/docker/netns/c79b5bc4d521",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38266"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38267"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38270"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38268"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38269"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-779469": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6c467b33615e9b23694556cf703c67534d9664704d2d9881f48bf748b99e88c5",
	                    "EndpointID": "87bad789c084c926cda9e505f87a4b6890b436c51a7024d8284faaa41d5f2b8d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-779469",
	                        "af34deb52be0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-779469 -n addons-779469
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-779469 logs -n 25: (1.530018307s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-755816                                                                     | download-only-755816   | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC | 07 Oct 24 13:03 UTC |
	| delete  | -p download-only-521885                                                                     | download-only-521885   | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC | 07 Oct 24 13:03 UTC |
	| start   | --download-only -p                                                                          | download-docker-951215 | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC |                     |
	|         | download-docker-951215                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-951215                                                                   | download-docker-951215 | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC | 07 Oct 24 13:03 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-354806   | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC |                     |
	|         | binary-mirror-354806                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38505                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-354806                                                                     | binary-mirror-354806   | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC | 07 Oct 24 13:03 UTC |
	| addons  | enable dashboard -p                                                                         | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC |                     |
	|         | addons-779469                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC |                     |
	|         | addons-779469                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-779469 --wait=true                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC | 07 Oct 24 13:07 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-779469 addons disable                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:07 UTC | 07 Oct 24 13:07 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-779469 addons disable                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	|         | -p addons-779469                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-779469 addons disable                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-779469 ip                                                                            | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	| addons  | addons-779469 addons disable                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-779469 addons disable                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	|         | -p addons-779469                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-779469 ssh cat                                                                       | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	|         | /opt/local-path-provisioner/pvc-ef2e515d-a253-470e-a4c5-ae9b384f01de_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-779469 addons disable                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-779469 addons                                                                        | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:16 UTC | 07 Oct 24 13:16 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-779469 addons                                                                        | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:16 UTC | 07 Oct 24 13:16 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-779469 addons                                                                        | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:16 UTC | 07 Oct 24 13:17 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-779469 addons                                                                        | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:17 UTC | 07 Oct 24 13:17 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-779469 ssh curl -s                                                                   | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:17 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-779469 ip                                                                            | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:19 UTC | 07 Oct 24 13:19 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:03:58
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:03:58.590991 1694879 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:03:58.591152 1694879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:03:58.591178 1694879 out.go:358] Setting ErrFile to fd 2...
	I1007 13:03:58.591198 1694879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:03:58.591461 1694879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-1688750/.minikube/bin
	I1007 13:03:58.591960 1694879 out.go:352] Setting JSON to false
	I1007 13:03:58.592858 1694879 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":96390,"bootTime":1728209849,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1007 13:03:58.592934 1694879 start.go:139] virtualization:  
	I1007 13:03:58.595954 1694879 out.go:177] * [addons-779469] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 13:03:58.598496 1694879 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:03:58.598537 1694879 notify.go:220] Checking for updates...
	I1007 13:03:58.601961 1694879 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:03:58.604140 1694879 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-1688750/kubeconfig
	I1007 13:03:58.606741 1694879 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-1688750/.minikube
	I1007 13:03:58.608845 1694879 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 13:03:58.610518 1694879 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:03:58.612435 1694879 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:03:58.639675 1694879 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 13:03:58.639810 1694879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:03:58.692800 1694879 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-07 13:03:58.683328708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:03:58.692924 1694879 docker.go:318] overlay module found
	I1007 13:03:58.695118 1694879 out.go:177] * Using the docker driver based on user configuration
	I1007 13:03:58.696768 1694879 start.go:297] selected driver: docker
	I1007 13:03:58.696786 1694879 start.go:901] validating driver "docker" against <nil>
	I1007 13:03:58.696801 1694879 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:03:58.697423 1694879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:03:58.750179 1694879 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-07 13:03:58.740210496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:03:58.750395 1694879 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 13:03:58.750629 1694879 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:03:58.752278 1694879 out.go:177] * Using Docker driver with root privileges
	I1007 13:03:58.753840 1694879 cni.go:84] Creating CNI manager for ""
	I1007 13:03:58.753910 1694879 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 13:03:58.753924 1694879 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 13:03:58.754011 1694879 start.go:340] cluster config:
	{Name:addons-779469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-779469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:03:58.755978 1694879 out.go:177] * Starting "addons-779469" primary control-plane node in "addons-779469" cluster
	I1007 13:03:58.757363 1694879 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 13:03:58.758550 1694879 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 13:03:58.759864 1694879 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:03:58.759918 1694879 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1007 13:03:58.759926 1694879 cache.go:56] Caching tarball of preloaded images
	I1007 13:03:58.760010 1694879 preload.go:172] Found /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1007 13:03:58.760020 1694879 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:03:58.760356 1694879 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/config.json ...
	I1007 13:03:58.760376 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/config.json: {Name:mkadf868b80152a3a366ce24c34abe79891c74a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:03:58.760458 1694879 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 13:03:58.774541 1694879 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 13:03:58.774674 1694879 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1007 13:03:58.774700 1694879 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1007 13:03:58.774706 1694879 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1007 13:03:58.774713 1694879 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1007 13:03:58.774719 1694879 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from local cache
	I1007 13:04:15.947604 1694879 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from cached tarball
	I1007 13:04:15.947642 1694879 cache.go:194] Successfully downloaded all kic artifacts
	I1007 13:04:15.947685 1694879 start.go:360] acquireMachinesLock for addons-779469: {Name:mkf6a3f1a5f9f020586f81ac1ba0c0c9f942937c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:04:15.947820 1694879 start.go:364] duration metric: took 107.649µs to acquireMachinesLock for "addons-779469"
	I1007 13:04:15.947850 1694879 start.go:93] Provisioning new machine with config: &{Name:addons-779469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-779469 Namespace:default APIServerHAVIP: APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:04:15.947925 1694879 start.go:125] createHost starting for "" (driver="docker")
	I1007 13:04:15.949634 1694879 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1007 13:04:15.949885 1694879 start.go:159] libmachine.API.Create for "addons-779469" (driver="docker")
	I1007 13:04:15.949923 1694879 client.go:168] LocalClient.Create starting
	I1007 13:04:15.950045 1694879 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem
	I1007 13:04:16.110919 1694879 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem
	I1007 13:04:17.570128 1694879 cli_runner.go:164] Run: docker network inspect addons-779469 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1007 13:04:17.589460 1694879 cli_runner.go:211] docker network inspect addons-779469 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1007 13:04:17.589555 1694879 network_create.go:284] running [docker network inspect addons-779469] to gather additional debugging logs...
	I1007 13:04:17.589575 1694879 cli_runner.go:164] Run: docker network inspect addons-779469
	W1007 13:04:17.604775 1694879 cli_runner.go:211] docker network inspect addons-779469 returned with exit code 1
	I1007 13:04:17.604806 1694879 network_create.go:287] error running [docker network inspect addons-779469]: docker network inspect addons-779469: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-779469 not found
	I1007 13:04:17.604820 1694879 network_create.go:289] output of [docker network inspect addons-779469]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-779469 not found
	
	** /stderr **
	I1007 13:04:17.604921 1694879 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 13:04:17.620548 1694879 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40006a5d80}
	I1007 13:04:17.620593 1694879 network_create.go:124] attempt to create docker network addons-779469 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1007 13:04:17.620658 1694879 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-779469 addons-779469
	I1007 13:04:17.690204 1694879 network_create.go:108] docker network addons-779469 192.168.49.0/24 created
	I1007 13:04:17.690243 1694879 kic.go:121] calculated static IP "192.168.49.2" for the "addons-779469" container
	I1007 13:04:17.690321 1694879 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1007 13:04:17.705519 1694879 cli_runner.go:164] Run: docker volume create addons-779469 --label name.minikube.sigs.k8s.io=addons-779469 --label created_by.minikube.sigs.k8s.io=true
	I1007 13:04:17.723148 1694879 oci.go:103] Successfully created a docker volume addons-779469
	I1007 13:04:17.723250 1694879 cli_runner.go:164] Run: docker run --rm --name addons-779469-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-779469 --entrypoint /usr/bin/test -v addons-779469:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1007 13:04:19.245037 1694879 cli_runner.go:217] Completed: docker run --rm --name addons-779469-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-779469 --entrypoint /usr/bin/test -v addons-779469:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib: (1.521744864s)
	I1007 13:04:19.245065 1694879 oci.go:107] Successfully prepared a docker volume addons-779469
	I1007 13:04:19.245090 1694879 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:04:19.245112 1694879 kic.go:194] Starting extracting preloaded images to volume ...
	I1007 13:04:19.245178 1694879 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-779469:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
	I1007 13:04:23.243433 1694879 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-779469:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (3.998215142s)
	I1007 13:04:23.243472 1694879 kic.go:203] duration metric: took 3.998355652s to extract preloaded images to volume ...
	W1007 13:04:23.243637 1694879 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1007 13:04:23.243761 1694879 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1007 13:04:23.292652 1694879 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-779469 --name addons-779469 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-779469 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-779469 --network addons-779469 --ip 192.168.49.2 --volume addons-779469:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1007 13:04:23.632644 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Running}}
	I1007 13:04:23.660183 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:23.686759 1694879 cli_runner.go:164] Run: docker exec addons-779469 stat /var/lib/dpkg/alternatives/iptables
	I1007 13:04:23.761450 1694879 oci.go:144] the created container "addons-779469" has a running status.
	I1007 13:04:23.761487 1694879 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa...
	I1007 13:04:24.130726 1694879 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1007 13:04:24.160447 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:24.187373 1694879 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1007 13:04:24.187400 1694879 kic_runner.go:114] Args: [docker exec --privileged addons-779469 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1007 13:04:24.275771 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:24.297104 1694879 machine.go:93] provisionDockerMachine start ...
	I1007 13:04:24.297203 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:24.319107 1694879 main.go:141] libmachine: Using SSH client type: native
	I1007 13:04:24.319381 1694879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38266 <nil> <nil>}
	I1007 13:04:24.319391 1694879 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:04:24.492365 1694879 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-779469
	
	I1007 13:04:24.492393 1694879 ubuntu.go:169] provisioning hostname "addons-779469"
	I1007 13:04:24.492467 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:24.514646 1694879 main.go:141] libmachine: Using SSH client type: native
	I1007 13:04:24.514881 1694879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38266 <nil> <nil>}
	I1007 13:04:24.514898 1694879 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-779469 && echo "addons-779469" | sudo tee /etc/hostname
	I1007 13:04:24.667369 1694879 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-779469
	
	I1007 13:04:24.667483 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:24.692536 1694879 main.go:141] libmachine: Using SSH client type: native
	I1007 13:04:24.692771 1694879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38266 <nil> <nil>}
	I1007 13:04:24.692788 1694879 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-779469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-779469/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-779469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:04:24.832129 1694879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:04:24.832152 1694879 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18424-1688750/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-1688750/.minikube}
	I1007 13:04:24.832179 1694879 ubuntu.go:177] setting up certificates
	I1007 13:04:24.832194 1694879 provision.go:84] configureAuth start
	I1007 13:04:24.832256 1694879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-779469
	I1007 13:04:24.852945 1694879 provision.go:143] copyHostCerts
	I1007 13:04:24.853027 1694879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.pem (1078 bytes)
	I1007 13:04:24.853179 1694879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-1688750/.minikube/cert.pem (1123 bytes)
	I1007 13:04:24.853237 1694879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-1688750/.minikube/key.pem (1679 bytes)
	I1007 13:04:24.853289 1694879 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca-key.pem org=jenkins.addons-779469 san=[127.0.0.1 192.168.49.2 addons-779469 localhost minikube]
	I1007 13:04:25.022336 1694879 provision.go:177] copyRemoteCerts
	I1007 13:04:25.022413 1694879 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:04:25.022459 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:25.039356 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:25.137146 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 13:04:25.163503 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 13:04:25.189557 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 13:04:25.213561 1694879 provision.go:87] duration metric: took 381.353185ms to configureAuth
	I1007 13:04:25.213592 1694879 ubuntu.go:193] setting minikube options for container-runtime
	I1007 13:04:25.213794 1694879 config.go:182] Loaded profile config "addons-779469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:04:25.213899 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:25.231245 1694879 main.go:141] libmachine: Using SSH client type: native
	I1007 13:04:25.231492 1694879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38266 <nil> <nil>}
	I1007 13:04:25.231515 1694879 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:04:25.469435 1694879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:04:25.469456 1694879 machine.go:96] duration metric: took 1.172331015s to provisionDockerMachine
	I1007 13:04:25.469466 1694879 client.go:171] duration metric: took 9.519533983s to LocalClient.Create
	I1007 13:04:25.469485 1694879 start.go:167] duration metric: took 9.519602642s to libmachine.API.Create "addons-779469"
	I1007 13:04:25.469493 1694879 start.go:293] postStartSetup for "addons-779469" (driver="docker")
	I1007 13:04:25.469505 1694879 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:04:25.469568 1694879 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:04:25.469618 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:25.488283 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:25.584813 1694879 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:04:25.588088 1694879 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 13:04:25.588126 1694879 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 13:04:25.588138 1694879 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 13:04:25.588145 1694879 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 13:04:25.588157 1694879 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-1688750/.minikube/addons for local assets ...
	I1007 13:04:25.588230 1694879 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-1688750/.minikube/files for local assets ...
	I1007 13:04:25.588255 1694879 start.go:296] duration metric: took 118.757032ms for postStartSetup
	I1007 13:04:25.588573 1694879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-779469
	I1007 13:04:25.606306 1694879 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/config.json ...
	I1007 13:04:25.606599 1694879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:04:25.606660 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:25.624497 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:25.716445 1694879 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 13:04:25.721031 1694879 start.go:128] duration metric: took 9.773089629s to createHost
	I1007 13:04:25.721056 1694879 start.go:83] releasing machines lock for "addons-779469", held for 9.773224469s
	I1007 13:04:25.721126 1694879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-779469
	I1007 13:04:25.737691 1694879 ssh_runner.go:195] Run: cat /version.json
	I1007 13:04:25.737747 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:25.737992 1694879 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:04:25.738066 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:25.760615 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:25.767638 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:25.983413 1694879 ssh_runner.go:195] Run: systemctl --version
	I1007 13:04:25.987517 1694879 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:04:26.129013 1694879 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 13:04:26.133366 1694879 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:04:26.153582 1694879 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1007 13:04:26.153721 1694879 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:04:26.186914 1694879 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1007 13:04:26.186936 1694879 start.go:495] detecting cgroup driver to use...
	I1007 13:04:26.186968 1694879 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 13:04:26.187020 1694879 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:04:26.203920 1694879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:04:26.214902 1694879 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:04:26.215006 1694879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:04:26.229011 1694879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:04:26.243502 1694879 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:04:26.330579 1694879 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:04:26.430502 1694879 docker.go:233] disabling docker service ...
	I1007 13:04:26.430609 1694879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:04:26.450474 1694879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:04:26.462448 1694879 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:04:26.553050 1694879 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:04:26.645854 1694879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:04:26.656539 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:04:26.673215 1694879 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:04:26.673280 1694879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:04:26.687625 1694879 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:04:26.687692 1694879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:04:26.697653 1694879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:04:26.707037 1694879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:04:26.716576 1694879 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:04:26.725855 1694879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:04:26.735253 1694879 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:04:26.751096 1694879 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:04:26.761047 1694879 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:04:26.769851 1694879 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:04:26.778394 1694879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:04:26.857036 1694879 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:04:26.968429 1694879 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:04:26.968538 1694879 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:04:26.972057 1694879 start.go:563] Will wait 60s for crictl version
	I1007 13:04:26.972121 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:04:26.975460 1694879 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:04:27.014942 1694879 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1007 13:04:27.015067 1694879 ssh_runner.go:195] Run: crio --version
	I1007 13:04:27.053702 1694879 ssh_runner.go:195] Run: crio --version
	I1007 13:04:27.098344 1694879 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1007 13:04:27.101084 1694879 cli_runner.go:164] Run: docker network inspect addons-779469 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 13:04:27.116339 1694879 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1007 13:04:27.119972 1694879 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:04:27.130894 1694879 kubeadm.go:883] updating cluster {Name:addons-779469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-779469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:04:27.131024 1694879 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:04:27.131083 1694879 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:04:27.206230 1694879 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:04:27.206251 1694879 crio.go:433] Images already preloaded, skipping extraction
	I1007 13:04:27.206308 1694879 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:04:27.244345 1694879 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:04:27.244369 1694879 cache_images.go:84] Images are preloaded, skipping loading
	I1007 13:04:27.244379 1694879 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1007 13:04:27.244466 1694879 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-779469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-779469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
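For context, the [Unit]/[Service] fragment above is the kubelet systemd drop-in that minikube generates for this node; later in this log (13:04:27.337493) it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The following is only a rough sketch of writing such a drop-in from Go, not minikube's actual code; the variable names are invented here and the flag list is abbreviated from the logged ExecStart line.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Hypothetical: compose a kubelet drop-in from the node name and IP seen
	// in this log and write it where the log's scp step places it. Only the
	// target path and flag values are taken from the log; the rest is a sketch.
	nodeName, nodeIP := "addons-779469", "192.168.49.2"
	dropIn := fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, nodeName, nodeIP)
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		// Expected to fail when not run as root; shown only for illustration.
		fmt.Println("write failed:", err)
	}
}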
	I1007 13:04:27.244554 1694879 ssh_runner.go:195] Run: crio config
	I1007 13:04:27.318963 1694879 cni.go:84] Creating CNI manager for ""
	I1007 13:04:27.319032 1694879 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 13:04:27.319056 1694879 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:04:27.319105 1694879 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-779469 NodeName:addons-779469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:04:27.319281 1694879 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-779469"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
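The YAML dump above is the kubeadm configuration minikube generated from the options logged at kubeadm.go:181 and then copied to /var/tmp/minikube/kubeadm.yaml.new (2151 bytes, see below). As a minimal illustration only, and not minikube's actual generator, a config of this shape can be rendered from a handful of values with Go's text/template; the struct and field names below are assumptions made for this sketch.

package main

import (
	"os"
	"text/template"
)

// clusterParams holds the few per-cluster values used in this sketch
// (hypothetical names, not minikube's types).
type clusterParams struct {
	NodeName          string
	AdvertiseAddress  string
	BindPort          int
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
	CRISocket         string
}

// kubeadmTmpl mirrors the shape of the InitConfiguration/ClusterConfiguration
// pair logged above, reduced to a few fields for brevity.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		NodeName:          "addons-779469",
		AdvertiseAddress:  "192.168.49.2",
		BindPort:          8443,
		KubernetesVersion: "v1.31.1",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		CRISocket:         "unix:///var/run/crio/crio.sock",
	}
	// Render to stdout; in the log, the rendered file is transferred to the
	// node and copied over /var/tmp/minikube/kubeadm.yaml before kubeadm init.
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}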
	
	I1007 13:04:27.319370 1694879 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:04:27.328356 1694879 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:04:27.328448 1694879 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:04:27.337493 1694879 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1007 13:04:27.357221 1694879 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:04:27.376150 1694879 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I1007 13:04:27.394117 1694879 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1007 13:04:27.397643 1694879 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:04:27.408542 1694879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:04:27.502146 1694879 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:04:27.516652 1694879 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469 for IP: 192.168.49.2
	I1007 13:04:27.516688 1694879 certs.go:194] generating shared ca certs ...
	I1007 13:04:27.516705 1694879 certs.go:226] acquiring lock for ca certs: {Name:mk3a082a64706c071bb4db632f3ec05c7c14e01c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:27.516862 1694879 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.key
	I1007 13:04:27.924465 1694879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt ...
	I1007 13:04:27.924499 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt: {Name:mk0870e61242f9fe806e59e090e40476885a4ab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:27.925223 1694879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.key ...
	I1007 13:04:27.925239 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.key: {Name:mk3a5f0507ac2ca23a463229c2fb9e6c7860bcf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:27.925770 1694879 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.key
	I1007 13:04:28.348730 1694879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.crt ...
	I1007 13:04:28.348767 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.crt: {Name:mk90cbb5a99d3b72d5722f5c1e82e601a619dd18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:28.349450 1694879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.key ...
	I1007 13:04:28.349468 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.key: {Name:mk17258ccf583bd5881068f5e4a136c22883f9c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:28.349561 1694879 certs.go:256] generating profile certs ...
	I1007 13:04:28.349624 1694879 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.key
	I1007 13:04:28.349651 1694879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt with IP's: []
	I1007 13:04:28.716095 1694879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt ...
	I1007 13:04:28.716127 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: {Name:mke8035127e1a22111a029f870eb1cb4e1bed430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:28.716333 1694879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.key ...
	I1007 13:04:28.716347 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.key: {Name:mk388e6481fa92945a975aa0160fe88892b596ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:28.716442 1694879 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.key.72e8297d
	I1007 13:04:28.716464 1694879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.crt.72e8297d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1007 13:04:28.936226 1694879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.crt.72e8297d ...
	I1007 13:04:28.936258 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.crt.72e8297d: {Name:mkc10581cb757e3538060d902f3ecb30de78eabb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:28.937016 1694879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.key.72e8297d ...
	I1007 13:04:28.937039 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.key.72e8297d: {Name:mk537dca66b298768d37bc7187b56749a9900f90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:28.937145 1694879 certs.go:381] copying /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.crt.72e8297d -> /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.crt
	I1007 13:04:28.937224 1694879 certs.go:385] copying /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.key.72e8297d -> /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.key
	I1007 13:04:28.937286 1694879 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/proxy-client.key
	I1007 13:04:28.937308 1694879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/proxy-client.crt with IP's: []
	I1007 13:04:29.483883 1694879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/proxy-client.crt ...
	I1007 13:04:29.483915 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/proxy-client.crt: {Name:mkbafe7318ae053f255591f295b86bd3887ed668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:29.484647 1694879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/proxy-client.key ...
	I1007 13:04:29.484665 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/proxy-client.key: {Name:mkd0b891d7056adc4eeb3f9fd4497e3c47643866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:29.484898 1694879 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 13:04:29.484941 1694879 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem (1078 bytes)
	I1007 13:04:29.484973 1694879 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:04:29.485002 1694879 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem (1679 bytes)
	I1007 13:04:29.485683 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:04:29.511317 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:04:29.534975 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:04:29.559430 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 13:04:29.583286 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 13:04:29.607102 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:04:29.631206 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:04:29.655040 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 13:04:29.687871 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:04:29.717204 1694879 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:04:29.747436 1694879 ssh_runner.go:195] Run: openssl version
	I1007 13:04:29.753907 1694879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:04:29.764054 1694879 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:04:29.768403 1694879 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 13:04 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:04:29.768466 1694879 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:04:29.777192 1694879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:04:29.789006 1694879 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:04:29.792759 1694879 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 13:04:29.792822 1694879 kubeadm.go:392] StartCluster: {Name:addons-779469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-779469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:04:29.792917 1694879 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:04:29.793013 1694879 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:04:29.833084 1694879 cri.go:89] found id: ""
	I1007 13:04:29.833166 1694879 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:04:29.841940 1694879 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:04:29.850762 1694879 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1007 13:04:29.850876 1694879 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:04:29.859936 1694879 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:04:29.859952 1694879 kubeadm.go:157] found existing configuration files:
	
	I1007 13:04:29.860003 1694879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:04:29.869105 1694879 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:04:29.869171 1694879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:04:29.877991 1694879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:04:29.886615 1694879 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:04:29.886699 1694879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:04:29.895220 1694879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:04:29.904408 1694879 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:04:29.904498 1694879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:04:29.913216 1694879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:04:29.922011 1694879 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:04:29.922077 1694879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:04:29.932551 1694879 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1007 13:04:29.975370 1694879 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:04:29.975447 1694879 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:04:29.995604 1694879 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1007 13:04:29.995745 1694879 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1007 13:04:29.995807 1694879 kubeadm.go:310] OS: Linux
	I1007 13:04:29.995884 1694879 kubeadm.go:310] CGROUPS_CPU: enabled
	I1007 13:04:29.995952 1694879 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1007 13:04:29.996024 1694879 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1007 13:04:29.996094 1694879 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1007 13:04:29.996183 1694879 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1007 13:04:29.996257 1694879 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1007 13:04:29.996335 1694879 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1007 13:04:29.996401 1694879 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1007 13:04:29.996474 1694879 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1007 13:04:30.078699 1694879 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:04:30.078844 1694879 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:04:30.078956 1694879 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:04:30.087013 1694879 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:04:30.090786 1694879 out.go:235]   - Generating certificates and keys ...
	I1007 13:04:30.090967 1694879 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:04:30.091050 1694879 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:04:30.834750 1694879 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 13:04:31.620888 1694879 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 13:04:31.762991 1694879 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 13:04:32.252935 1694879 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 13:04:33.355774 1694879 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 13:04:33.355978 1694879 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-779469 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1007 13:04:33.719632 1694879 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 13:04:33.719835 1694879 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-779469 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1007 13:04:34.331388 1694879 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 13:04:34.756092 1694879 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 13:04:35.127458 1694879 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 13:04:35.127819 1694879 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:04:35.310105 1694879 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:04:35.888803 1694879 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:04:36.113311 1694879 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:04:36.243172 1694879 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:04:36.415322 1694879 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:04:36.416048 1694879 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:04:36.419031 1694879 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:04:36.422254 1694879 out.go:235]   - Booting up control plane ...
	I1007 13:04:36.422355 1694879 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:04:36.422432 1694879 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:04:36.425898 1694879 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:04:36.441216 1694879 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:04:36.447158 1694879 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:04:36.447222 1694879 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:04:36.545968 1694879 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:04:36.546094 1694879 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:04:38.046754 1694879 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.500829679s
	I1007 13:04:38.046843 1694879 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:04:44.048664 1694879 kubeadm.go:310] [api-check] The API server is healthy after 6.001968403s
	I1007 13:04:44.069713 1694879 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:04:44.086252 1694879 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:04:44.113930 1694879 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:04:44.114186 1694879 kubeadm.go:310] [mark-control-plane] Marking the node addons-779469 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:04:44.125912 1694879 kubeadm.go:310] [bootstrap-token] Using token: 61fkzd.5r98z9kc930n9kup
	I1007 13:04:44.130342 1694879 out.go:235]   - Configuring RBAC rules ...
	I1007 13:04:44.130474 1694879 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:04:44.133629 1694879 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:04:44.142573 1694879 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:04:44.148930 1694879 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:04:44.153305 1694879 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:04:44.158316 1694879 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:04:44.457738 1694879 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:04:44.898676 1694879 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:04:45.457304 1694879 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:04:45.457329 1694879 kubeadm.go:310] 
	I1007 13:04:45.457392 1694879 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:04:45.457405 1694879 kubeadm.go:310] 
	I1007 13:04:45.457482 1694879 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:04:45.457490 1694879 kubeadm.go:310] 
	I1007 13:04:45.457515 1694879 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:04:45.457576 1694879 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:04:45.457633 1694879 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:04:45.457641 1694879 kubeadm.go:310] 
	I1007 13:04:45.457695 1694879 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:04:45.457703 1694879 kubeadm.go:310] 
	I1007 13:04:45.457750 1694879 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:04:45.457758 1694879 kubeadm.go:310] 
	I1007 13:04:45.457810 1694879 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:04:45.457888 1694879 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:04:45.457958 1694879 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:04:45.457967 1694879 kubeadm.go:310] 
	I1007 13:04:45.458051 1694879 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:04:45.458130 1694879 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:04:45.458140 1694879 kubeadm.go:310] 
	I1007 13:04:45.458225 1694879 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 61fkzd.5r98z9kc930n9kup \
	I1007 13:04:45.458330 1694879 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:659002c3c36ab0885bf81fe4258f61cead5b2d03fd8e3c7ecf684b765e0cd0b4 \
	I1007 13:04:45.458354 1694879 kubeadm.go:310] 	--control-plane 
	I1007 13:04:45.458361 1694879 kubeadm.go:310] 
	I1007 13:04:45.458445 1694879 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:04:45.458453 1694879 kubeadm.go:310] 
	I1007 13:04:45.458534 1694879 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 61fkzd.5r98z9kc930n9kup \
	I1007 13:04:45.458637 1694879 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:659002c3c36ab0885bf81fe4258f61cead5b2d03fd8e3c7ecf684b765e0cd0b4 
	I1007 13:04:45.461461 1694879 kubeadm.go:310] W1007 13:04:29.972119    1187 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:04:45.461763 1694879 kubeadm.go:310] W1007 13:04:29.972978    1187 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:04:45.461977 1694879 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1007 13:04:45.462088 1694879 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:04:45.462107 1694879 cni.go:84] Creating CNI manager for ""
	I1007 13:04:45.462116 1694879 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 13:04:45.466791 1694879 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 13:04:45.469531 1694879 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 13:04:45.473229 1694879 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 13:04:45.473247 1694879 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 13:04:45.490661 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
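The CNI step above checks that the standard plugins are present and then applies the manifest minikube copied to the node. A minimal sketch of the equivalent manual steps, using only the paths and kubeconfig shown in the log (the manifest contents themselves are not reproduced here):

    # confirm the bundled CNI plugins exist on the node (the log stats portmap)
    stat /opt/cni/bin/portmap

    # apply the CNI manifest that was copied to /var/tmp/minikube/cni.yaml
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      apply -f /var/tmp/minikube/cni.yaml

    # the kindnet pods should then appear in kube-system
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      get pods -A | grep kindnet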
	I1007 13:04:45.788501 1694879 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:04:45.788636 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:45.788720 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-779469 minikube.k8s.io/updated_at=2024_10_07T13_04_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=addons-779469 minikube.k8s.io/primary=true
	I1007 13:04:45.956045 1694879 ops.go:34] apiserver oom_adj: -16
	I1007 13:04:45.956169 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:46.456926 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:46.956422 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:47.456397 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:47.956755 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:48.456749 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:48.956473 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:49.456743 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:49.956766 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:50.456294 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:50.575420 1694879 kubeadm.go:1113] duration metric: took 4.786831402s to wait for elevateKubeSystemPrivileges
	I1007 13:04:50.575454 1694879 kubeadm.go:394] duration metric: took 20.782634659s to StartCluster
	I1007 13:04:50.575477 1694879 settings.go:142] acquiring lock: {Name:mkc4eef6ec2cbdb287b7d49da88f957f9ede0465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:50.575648 1694879 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-1688750/kubeconfig
	I1007 13:04:50.576043 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/kubeconfig: {Name:mkae782d6e0841d1e777fb7cb23057f0dd940052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:50.576777 1694879 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:04:50.576907 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 13:04:50.577150 1694879 config.go:182] Loaded profile config "addons-779469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:04:50.577179 1694879 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1007 13:04:50.577260 1694879 addons.go:69] Setting yakd=true in profile "addons-779469"
	I1007 13:04:50.577276 1694879 addons.go:234] Setting addon yakd=true in "addons-779469"
	I1007 13:04:50.577298 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.577802 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.578313 1694879 addons.go:69] Setting cloud-spanner=true in profile "addons-779469"
	I1007 13:04:50.578339 1694879 addons.go:234] Setting addon cloud-spanner=true in "addons-779469"
	I1007 13:04:50.578366 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.578387 1694879 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-779469"
	I1007 13:04:50.578404 1694879 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-779469"
	I1007 13:04:50.578428 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.578772 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.578832 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.582368 1694879 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-779469"
	I1007 13:04:50.582440 1694879 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-779469"
	I1007 13:04:50.582481 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.582976 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.583674 1694879 addons.go:69] Setting default-storageclass=true in profile "addons-779469"
	I1007 13:04:50.583712 1694879 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-779469"
	I1007 13:04:50.584105 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.600536 1694879 addons.go:69] Setting registry=true in profile "addons-779469"
	I1007 13:04:50.601090 1694879 addons.go:234] Setting addon registry=true in "addons-779469"
	I1007 13:04:50.601737 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.602806 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.603974 1694879 addons.go:69] Setting gcp-auth=true in profile "addons-779469"
	I1007 13:04:50.604051 1694879 mustload.go:65] Loading cluster: addons-779469
	I1007 13:04:50.604311 1694879 config.go:182] Loaded profile config "addons-779469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:04:50.604739 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.616393 1694879 addons.go:69] Setting ingress=true in profile "addons-779469"
	I1007 13:04:50.616431 1694879 addons.go:234] Setting addon ingress=true in "addons-779469"
	I1007 13:04:50.616473 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.616943 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.624183 1694879 addons.go:69] Setting storage-provisioner=true in profile "addons-779469"
	I1007 13:04:50.624420 1694879 addons.go:234] Setting addon storage-provisioner=true in "addons-779469"
	I1007 13:04:50.624602 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.625144 1694879 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-779469"
	I1007 13:04:50.625178 1694879 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-779469"
	I1007 13:04:50.625496 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.631700 1694879 addons.go:69] Setting ingress-dns=true in profile "addons-779469"
	I1007 13:04:50.631792 1694879 addons.go:234] Setting addon ingress-dns=true in "addons-779469"
	I1007 13:04:50.631886 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.632688 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.648949 1694879 addons.go:69] Setting volcano=true in profile "addons-779469"
	I1007 13:04:50.649033 1694879 addons.go:234] Setting addon volcano=true in "addons-779469"
	I1007 13:04:50.649099 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.649621 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.649872 1694879 addons.go:69] Setting inspektor-gadget=true in profile "addons-779469"
	I1007 13:04:50.649890 1694879 addons.go:234] Setting addon inspektor-gadget=true in "addons-779469"
	I1007 13:04:50.649916 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.650296 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.669521 1694879 out.go:177] * Verifying Kubernetes components...
	I1007 13:04:50.677498 1694879 addons.go:69] Setting metrics-server=true in profile "addons-779469"
	I1007 13:04:50.679567 1694879 addons.go:234] Setting addon metrics-server=true in "addons-779469"
	I1007 13:04:50.679652 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.677583 1694879 addons.go:69] Setting volumesnapshots=true in profile "addons-779469"
	I1007 13:04:50.695712 1694879 addons.go:234] Setting addon volumesnapshots=true in "addons-779469"
	I1007 13:04:50.695790 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.699229 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.705488 1694879 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1007 13:04:50.709477 1694879 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1007 13:04:50.709562 1694879 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1007 13:04:50.709697 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:50.716051 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.723004 1694879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:04:50.723713 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.731433 1694879 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1007 13:04:50.734879 1694879 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 13:04:50.734900 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1007 13:04:50.734981 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:50.758376 1694879 addons.go:234] Setting addon default-storageclass=true in "addons-779469"
	I1007 13:04:50.758443 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.758919 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.778112 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.789134 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1007 13:04:50.799707 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1007 13:04:50.802329 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1007 13:04:50.809960 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1007 13:04:50.818521 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1007 13:04:50.821258 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1007 13:04:50.821360 1694879 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1007 13:04:50.851327 1694879 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1007 13:04:50.857514 1694879 out.go:177]   - Using image docker.io/registry:2.8.3
	I1007 13:04:50.857676 1694879 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1007 13:04:50.861620 1694879 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 13:04:50.861696 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1007 13:04:50.861802 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:50.862122 1694879 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1007 13:04:50.862171 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1007 13:04:50.862248 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:50.894559 1694879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 13:04:50.897688 1694879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 13:04:50.911772 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1007 13:04:50.912093 1694879 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1007 13:04:50.912116 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1007 13:04:50.912202 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	W1007 13:04:50.927909 1694879 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1007 13:04:50.940202 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1007 13:04:50.942680 1694879 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:04:50.942706 1694879 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:04:50.942798 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:50.963342 1694879 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1007 13:04:50.963383 1694879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1007 13:04:50.963477 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:50.968861 1694879 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-779469"
	I1007 13:04:50.968905 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.969302 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.980108 1694879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1007 13:04:50.983437 1694879 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 13:04:50.983457 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1007 13:04:50.983544 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:51.018171 1694879 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:04:51.029307 1694879 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:04:51.029332 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:04:51.029402 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:51.033441 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
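The sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway. Reconstructed from the sed expressions in that command (indentation approximate), the fragment inserted into the Corefile ahead of the forward directive is:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

and a `log` directive is added before `errors`. The edited ConfigMap is then pushed back with `kubectl replace -f -`.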
	I1007 13:04:51.036533 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1007 13:04:51.036671 1694879 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1007 13:04:51.036725 1694879 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1007 13:04:51.040370 1694879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1007 13:04:51.040401 1694879 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1007 13:04:51.040485 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:51.042393 1694879 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:04:51.042414 1694879 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:04:51.042499 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:51.058107 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1007 13:04:51.058133 1694879 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1007 13:04:51.058214 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:51.070405 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.091428 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.116909 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.121119 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.136475 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.143690 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.171153 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.191926 1694879 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:04:51.192580 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.200250 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.207590 1694879 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1007 13:04:51.210082 1694879 out.go:177]   - Using image docker.io/busybox:stable
	I1007 13:04:51.217688 1694879 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 13:04:51.217708 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1007 13:04:51.217773 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:51.219773 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.235310 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.252346 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.269474 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
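Each "new ssh client" line above reuses the host port that Docker mapped to the container's port 22 (38266 in this run) together with the per-machine key. A hedged sketch of opening the same session by hand, with the port looked up via the inspect template from the log; normally `minikube ssh -p addons-779469` does this for you:

    PORT=$(docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-779469)
    ssh -p "$PORT" \
      -i /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa \
      docker@127.0.0.1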
	I1007 13:04:51.551105 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:04:51.596821 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 13:04:51.612862 1694879 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1007 13:04:51.612902 1694879 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1007 13:04:51.650333 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1007 13:04:51.652448 1694879 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:04:51.652472 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1007 13:04:51.662977 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1007 13:04:51.663012 1694879 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1007 13:04:51.674748 1694879 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1007 13:04:51.674845 1694879 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1007 13:04:51.689748 1694879 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1007 13:04:51.689829 1694879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1007 13:04:51.718154 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 13:04:51.736146 1694879 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1007 13:04:51.736237 1694879 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1007 13:04:51.769164 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:04:51.804360 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 13:04:51.808507 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 13:04:51.810761 1694879 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1007 13:04:51.810864 1694879 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1007 13:04:51.857076 1694879 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:04:51.857165 1694879 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:04:51.897651 1694879 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1007 13:04:51.897747 1694879 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1007 13:04:51.899612 1694879 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1007 13:04:51.899683 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1007 13:04:51.902111 1694879 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1007 13:04:51.902181 1694879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1007 13:04:51.912897 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1007 13:04:51.912983 1694879 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1007 13:04:52.022406 1694879 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1007 13:04:52.022481 1694879 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1007 13:04:52.051861 1694879 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:04:52.052167 1694879 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:04:52.090936 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1007 13:04:52.094871 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1007 13:04:52.094968 1694879 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1007 13:04:52.100205 1694879 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1007 13:04:52.100300 1694879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1007 13:04:52.153029 1694879 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1007 13:04:52.153135 1694879 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1007 13:04:52.227993 1694879 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1007 13:04:52.228014 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1007 13:04:52.246483 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:04:52.275619 1694879 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1007 13:04:52.275693 1694879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1007 13:04:52.297677 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1007 13:04:52.297762 1694879 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1007 13:04:52.396928 1694879 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1007 13:04:52.397014 1694879 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1007 13:04:52.443141 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1007 13:04:52.443224 1694879 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1007 13:04:52.444498 1694879 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1007 13:04:52.444566 1694879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1007 13:04:52.448461 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1007 13:04:52.579807 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1007 13:04:52.579900 1694879 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1007 13:04:52.604887 1694879 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 13:04:52.604979 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1007 13:04:52.648781 1694879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1007 13:04:52.648860 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1007 13:04:52.750138 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1007 13:04:52.750226 1694879 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1007 13:04:52.763273 1694879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1007 13:04:52.763362 1694879 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1007 13:04:52.768091 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 13:04:52.894127 1694879 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.702164099s)
	I1007 13:04:52.894491 1694879 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.861019923s)
	I1007 13:04:52.894548 1694879 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1007 13:04:52.896764 1694879 node_ready.go:35] waiting up to 6m0s for node "addons-779469" to be "Ready" ...
	I1007 13:04:52.897760 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 13:04:52.897819 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1007 13:04:52.916059 1694879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1007 13:04:52.916082 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1007 13:04:53.081940 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 13:04:53.086619 1694879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1007 13:04:53.086690 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1007 13:04:53.263939 1694879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 13:04:53.264017 1694879 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1007 13:04:53.583254 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 13:04:53.771998 1694879 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-779469" context rescaled to 1 replicas
	I1007 13:04:55.059303 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:04:55.861779 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.3106273s)
	I1007 13:04:55.861863 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.265014398s)
	I1007 13:04:55.861901 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.211544932s)
	I1007 13:04:55.977253 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.259013872s)
	I1007 13:04:55.977352 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.208099549s)
	W1007 13:04:56.074591 1694879 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1007 13:04:57.029627 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.225173959s)
	I1007 13:04:57.029659 1694879 addons.go:475] Verifying addon ingress=true in "addons-779469"
	I1007 13:04:57.029729 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.22114876s)
	I1007 13:04:57.029791 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.938697602s)
	I1007 13:04:57.029807 1694879 addons.go:475] Verifying addon registry=true in "addons-779469"
	I1007 13:04:57.030337 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.783823572s)
	I1007 13:04:57.030369 1694879 addons.go:475] Verifying addon metrics-server=true in "addons-779469"
	I1007 13:04:57.030412 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.581864912s)
	I1007 13:04:57.030576 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.262412362s)
	W1007 13:04:57.030605 1694879 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 13:04:57.030626 1694879 retry.go:31] will retry after 270.153464ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
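This failure is the usual ordering race when a custom resource (the VolumeSnapshotClass) is applied in the same batch as the CRD that defines it; the harness recovers by retrying the whole batch with `apply --force` a little later. A sketch of the alternative the error message hints at, applying the CRDs first and waiting for them to be established before the snapshot class (file names taken from the apply command in the log, the `$KCTL` shorthand is only for readability):

    KCTL="sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl"

    # install the snapshot CRDs on their own first
    $KCTL apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml

    # block until the API server actually serves the new kind
    $KCTL wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io

    # only then apply the VolumeSnapshotClass and the controller manifests
    $KCTL apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml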
	I1007 13:04:57.030693 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.948678612s)
	I1007 13:04:57.033755 1694879 out.go:177] * Verifying registry addon...
	I1007 13:04:57.033812 1694879 out.go:177] * Verifying ingress addon...
	I1007 13:04:57.035501 1694879 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-779469 service yakd-dashboard -n yakd-dashboard
	
	I1007 13:04:57.038150 1694879 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1007 13:04:57.039062 1694879 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1007 13:04:57.052949 1694879 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 13:04:57.052974 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:04:57.053134 1694879 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1007 13:04:57.053148 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
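The repeated "waiting for pod ... current state: Pending" lines that follow are the harness polling the API until the pods behind each label selector report Ready. Roughly the same check can be expressed with kubectl wait; a minimal sketch for the registry addon, using the node-side kubectl and the label selector shown in the log (the 6m timeout is arbitrary):

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=registry \
      --for=condition=Ready --timeout=6m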
	I1007 13:04:57.293569 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.710211474s)
	I1007 13:04:57.293603 1694879 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-779469"
	I1007 13:04:57.296464 1694879 out.go:177] * Verifying csi-hostpath-driver addon...
	I1007 13:04:57.299304 1694879 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1007 13:04:57.301679 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 13:04:57.306561 1694879 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 13:04:57.306656 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:04:57.399940 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:04:57.546877 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:04:57.549177 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:04:57.803788 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:04:58.044663 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:04:58.046885 1694879 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1007 13:04:58.047029 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:58.057670 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:04:58.070056 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:58.234274 1694879 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1007 13:04:58.298922 1694879 addons.go:234] Setting addon gcp-auth=true in "addons-779469"
	I1007 13:04:58.298975 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:58.299427 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:58.316559 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:04:58.335804 1694879 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1007 13:04:58.335864 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:58.367948 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:58.546563 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:04:58.550800 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:04:58.803470 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:04:59.043545 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:04:59.044561 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:04:59.303304 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:04:59.400979 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:04:59.543004 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:04:59.555867 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:04:59.808621 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:00.089609 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:00.090261 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:00.305001 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:00.439883 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.138140479s)
	I1007 13:05:00.439979 1694879 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.104152918s)
	I1007 13:05:00.445754 1694879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 13:05:00.452068 1694879 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1007 13:05:00.458762 1694879 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1007 13:05:00.458808 1694879 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1007 13:05:00.487411 1694879 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1007 13:05:00.487435 1694879 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1007 13:05:00.513003 1694879 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 13:05:00.513032 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1007 13:05:00.538090 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 13:05:00.553514 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:00.553851 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:00.805732 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:01.049684 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:01.051159 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:01.289707 1694879 addons.go:475] Verifying addon gcp-auth=true in "addons-779469"
	I1007 13:05:01.292464 1694879 out.go:177] * Verifying gcp-auth addon...
	I1007 13:05:01.296779 1694879 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1007 13:05:01.300916 1694879 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1007 13:05:01.300943 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:01.304302 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:01.401066 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:01.558672 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:01.564211 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:01.804295 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:01.808768 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:02.045477 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:02.045570 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:02.300730 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:02.303396 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:02.544625 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:02.547625 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:02.800439 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:02.802754 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:03.041609 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:03.044206 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:03.300770 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:03.302774 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:03.544961 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:03.546048 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:03.802101 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:03.804462 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:03.900764 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:04.042962 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:04.043976 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:04.300224 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:04.302940 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:04.545207 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:04.545586 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:04.799761 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:04.802505 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:05.041277 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:05.042609 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:05.300102 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:05.302444 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:05.546275 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:05.547235 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:05.801632 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:05.803077 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:06.041715 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:06.043711 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:06.300402 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:06.303263 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:06.400272 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:06.545921 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:06.550776 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:06.801226 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:06.803686 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:07.041660 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:07.042642 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:07.301005 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:07.302884 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:07.545011 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:07.546138 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:07.802046 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:07.803737 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:08.041765 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:08.043648 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:08.300615 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:08.303260 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:08.401488 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:08.545479 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:08.547379 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:08.800483 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:08.802910 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:09.041258 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:09.043119 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:09.300571 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:09.302676 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:09.545872 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:09.546795 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:09.800317 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:09.803464 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:10.041525 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:10.043055 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:10.300609 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:10.302947 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:10.546291 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:10.546939 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:10.800326 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:10.802254 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:10.900446 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:11.042061 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:11.043208 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:11.300567 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:11.302590 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:11.544958 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:11.546206 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:11.800090 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:11.803214 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:12.042116 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:12.042786 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:12.300472 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:12.302652 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:12.545847 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:12.546388 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:12.800666 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:12.803106 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:13.042073 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:13.043314 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:13.300663 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:13.304158 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:13.400826 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:13.544786 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:13.546239 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:13.800707 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:13.802498 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:14.041873 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:14.042894 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:14.300834 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:14.303488 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:14.545572 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:14.545708 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:14.800447 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:14.803445 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:15.042987 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:15.043755 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:15.299740 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:15.302379 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:15.546191 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:15.547019 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:15.800260 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:15.802671 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:15.900975 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:16.042824 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:16.043284 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:16.299690 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:16.302218 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:16.544775 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:16.546819 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:16.800220 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:16.802514 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:17.041371 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:17.042985 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:17.300462 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:17.303228 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:17.545330 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:17.545990 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:17.800715 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:17.802955 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:18.041699 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:18.043524 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:18.300255 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:18.302338 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:18.400824 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:18.547350 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:18.550262 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:18.799977 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:18.802545 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:19.042294 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:19.042713 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:19.301200 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:19.303358 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:19.545903 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:19.546810 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:19.800833 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:19.806591 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:20.041832 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:20.043588 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:20.300227 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:20.302924 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:20.546114 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:20.546484 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:20.800217 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:20.802916 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:20.900506 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:21.042773 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:21.043265 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:21.300383 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:21.302739 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:21.544697 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:21.546972 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:21.800418 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:21.802595 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:22.041859 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:22.044794 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:22.300223 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:22.303560 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:22.546194 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:22.548664 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:22.800356 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:22.802402 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:22.900672 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:23.042427 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:23.043380 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:23.300817 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:23.302636 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:23.544901 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:23.545675 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:23.800437 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:23.802791 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:24.042253 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:24.043444 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:24.300415 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:24.302532 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:24.544683 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:24.546501 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:24.800001 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:24.802516 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:24.901010 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:25.042685 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:25.043323 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:25.300036 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:25.302795 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:25.546472 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:25.547309 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:25.802053 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:25.803100 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:26.042253 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:26.044231 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:26.301129 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:26.303452 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:26.546559 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:26.548096 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:26.801507 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:26.805121 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:27.041677 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:27.043158 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:27.303022 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:27.304814 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:27.401033 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:27.547139 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:27.548566 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:27.800311 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:27.802882 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:28.041729 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:28.043171 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:28.303384 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:28.303870 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:28.545579 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:28.548205 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:28.800670 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:28.803275 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:29.041510 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:29.042742 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:29.301038 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:29.303432 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:29.545896 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:29.546304 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:29.800492 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:29.802692 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:29.901381 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:30.045339 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:30.046153 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:30.300637 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:30.302779 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:30.545841 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:30.548586 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:30.800194 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:30.802282 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:31.042402 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:31.043344 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:31.315503 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:31.333430 1694879 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 13:05:31.333455 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:31.406624 1694879 node_ready.go:49] node "addons-779469" has status "Ready":"True"
	I1007 13:05:31.406647 1694879 node_ready.go:38] duration metric: took 38.509715733s for node "addons-779469" to be "Ready" ...
	I1007 13:05:31.406658 1694879 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:05:31.578364 1694879 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kfrdl" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:31.645584 1694879 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 13:05:31.645609 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:31.647176 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:31.826566 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:31.831785 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:32.069335 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:32.070172 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:32.303235 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:32.306024 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:32.545407 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:32.545883 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:32.584429 1694879 pod_ready.go:93] pod "coredns-7c65d6cfc9-kfrdl" in "kube-system" namespace has status "Ready":"True"
	I1007 13:05:32.584455 1694879 pod_ready.go:82] duration metric: took 1.006054269s for pod "coredns-7c65d6cfc9-kfrdl" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.584508 1694879 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-779469" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.590053 1694879 pod_ready.go:93] pod "etcd-addons-779469" in "kube-system" namespace has status "Ready":"True"
	I1007 13:05:32.590080 1694879 pod_ready.go:82] duration metric: took 5.556159ms for pod "etcd-addons-779469" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.590096 1694879 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-779469" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.595668 1694879 pod_ready.go:93] pod "kube-apiserver-addons-779469" in "kube-system" namespace has status "Ready":"True"
	I1007 13:05:32.595734 1694879 pod_ready.go:82] duration metric: took 5.602713ms for pod "kube-apiserver-addons-779469" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.595762 1694879 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-779469" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.600828 1694879 pod_ready.go:93] pod "kube-controller-manager-addons-779469" in "kube-system" namespace has status "Ready":"True"
	I1007 13:05:32.600855 1694879 pod_ready.go:82] duration metric: took 5.071927ms for pod "kube-controller-manager-addons-779469" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.600869 1694879 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6ncrf" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.606287 1694879 pod_ready.go:93] pod "kube-proxy-6ncrf" in "kube-system" namespace has status "Ready":"True"
	I1007 13:05:32.606315 1694879 pod_ready.go:82] duration metric: took 5.438582ms for pod "kube-proxy-6ncrf" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.606326 1694879 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-779469" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.801826 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:32.804788 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:33.010217 1694879 pod_ready.go:93] pod "kube-scheduler-addons-779469" in "kube-system" namespace has status "Ready":"True"
	I1007 13:05:33.010305 1694879 pod_ready.go:82] duration metric: took 403.938673ms for pod "kube-scheduler-addons-779469" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:33.010334 1694879 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:33.043957 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:33.044628 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:33.301019 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:33.304753 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:33.544039 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:33.546203 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:33.803073 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:33.808838 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:34.045254 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:34.051198 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:34.303505 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:34.306577 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:34.552158 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:34.552669 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:34.801308 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:34.805144 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:35.018120 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:35.045113 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:35.046563 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:35.304864 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:35.306894 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:35.547733 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:35.551000 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:35.801618 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:35.806517 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:36.044951 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:36.047266 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:36.301268 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:36.306373 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:36.565705 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:36.566712 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:36.801381 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:36.804800 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:37.018941 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:37.046905 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:37.048469 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:37.301660 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:37.306725 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:37.547257 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:37.550262 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:37.803515 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:37.809117 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:38.051125 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:38.054023 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:38.301806 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:38.304583 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:38.554356 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:38.555293 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:38.800296 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:38.804501 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:39.047807 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:39.048872 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:39.315348 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:39.323400 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:39.517848 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:39.560927 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:39.561480 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:39.801672 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:39.806290 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:40.047044 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:40.047580 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:40.302315 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:40.306916 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:40.552680 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:40.554062 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:40.803340 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:40.807455 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:41.048999 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:41.050209 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:41.302744 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:41.307136 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:41.519128 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:41.565765 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:41.566925 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:41.814133 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:41.826448 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:42.046494 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:42.047431 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:42.303119 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:42.306628 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:42.548565 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:42.549449 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:42.801589 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:42.804406 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:43.041698 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:43.044548 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:43.304594 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:43.306898 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:43.558994 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:43.560582 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:43.801184 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:43.804586 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:44.018311 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:44.047808 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:44.048532 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:44.303942 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:44.305157 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:44.547268 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:44.548484 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:44.804139 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:44.806206 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:45.045064 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:45.045384 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:45.301957 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:45.304681 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:45.545992 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:45.547209 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:45.801361 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:45.804267 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:46.043429 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:46.043647 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:46.302006 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:46.304301 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:46.517238 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:46.568114 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:46.568602 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:46.803432 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:46.806005 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:47.069278 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:47.072336 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:47.301502 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:47.305327 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:47.562800 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:47.564905 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:47.802759 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:47.806307 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:48.045804 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:48.047811 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:48.311121 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:48.314784 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:48.522333 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:48.551458 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:48.552483 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:48.801220 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:48.806178 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:49.045215 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:49.046791 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:49.300894 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:49.306108 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:49.558721 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:49.560008 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:49.801820 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:49.805873 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:50.043839 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:50.050651 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:50.301313 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:50.305374 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:50.555368 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:50.556673 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:50.802062 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:50.805529 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:51.017515 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:51.041918 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:51.044186 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:51.300418 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:51.304958 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:51.568214 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:51.571798 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:51.801604 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:51.806834 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:52.045001 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:52.046589 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:52.301481 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:52.305449 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:52.547091 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:52.547166 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:52.800506 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:52.804046 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:53.044077 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:53.045201 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:53.300993 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:53.304211 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:53.519232 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:53.548325 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:53.549387 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:53.802681 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:53.806153 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:54.049443 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:54.051465 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:54.303710 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:54.308080 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:54.552097 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:54.552927 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:54.801013 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:54.805067 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:55.043731 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:55.044544 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:55.301168 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:55.304037 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:55.546482 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:55.547849 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:55.803309 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:55.809242 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:56.019132 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:56.044911 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:56.047332 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:56.301808 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:56.307917 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:56.554774 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:56.561177 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:56.800580 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:56.804448 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:57.046522 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:57.048058 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:57.300534 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:57.305700 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:57.546707 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:57.548666 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:57.801494 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:57.819455 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:58.050238 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:58.052663 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:58.302091 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:58.305906 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:58.519290 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:58.550840 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:58.554151 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:58.813934 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:58.818593 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:59.052953 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:59.054984 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:59.303161 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:59.310007 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:59.553797 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:59.554614 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:59.806650 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:59.827378 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:00.051231 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:00.053099 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:00.305774 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:00.310771 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:00.521238 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:00.565208 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:00.565942 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:00.813624 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:00.815069 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:01.044622 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:01.045124 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:01.301193 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:01.306158 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:01.556452 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:01.558063 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:01.805115 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:01.808151 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:02.045354 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:02.054514 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:02.301986 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:02.304348 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:02.544675 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:02.545882 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:02.800720 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:02.804565 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:03.019725 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:03.042699 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:03.044041 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:03.301379 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:03.305105 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:03.545260 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:03.546731 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:03.801953 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:03.804350 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:04.043517 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:04.044375 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:04.301093 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:04.303956 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:04.554425 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:04.554880 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:04.805065 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:04.805741 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:05.024633 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:05.052208 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:05.053295 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:05.302467 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:05.305695 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:05.546753 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:05.548050 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:05.803668 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:05.806876 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:06.046189 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:06.048055 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:06.300613 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:06.304774 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:06.546019 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:06.547233 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:06.804000 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:06.808030 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:07.045513 1694879 kapi.go:107] duration metric: took 1m10.007361631s to wait for kubernetes.io/minikube-addons=registry ...
	I1007 13:06:07.046893 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:07.300266 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:07.303512 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:07.516036 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:07.546767 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:07.800772 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:07.805116 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:08.044618 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:08.302749 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:08.307823 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:08.546150 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:08.807946 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:08.808377 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:09.044094 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:09.301249 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:09.306110 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:09.519012 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:09.558361 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:09.810049 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:09.812263 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:10.047818 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:10.301937 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:10.305912 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:10.546099 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:10.801862 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:10.803981 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:11.047000 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:11.300650 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:11.305293 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:11.544164 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:11.806601 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:11.808029 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:12.020770 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:12.043425 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:12.300788 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:12.303961 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:12.545686 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:12.800491 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:12.803875 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:13.052927 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:13.301224 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:13.306063 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:13.547654 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:13.800946 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:13.805921 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:14.045218 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:14.301103 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:14.303913 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:14.517264 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:14.553621 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:14.804611 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:14.807876 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:15.045559 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:15.301248 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:15.304446 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:15.547372 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:15.801300 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:15.804397 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:16.044837 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:16.301074 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:16.304241 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:16.519260 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:16.545362 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:16.810486 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:16.811282 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:17.050369 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:17.300838 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:17.303936 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:17.547506 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:17.801804 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:17.806742 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:18.049844 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:18.301481 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:18.303950 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:18.547291 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:18.801300 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:18.805354 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:19.016605 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:19.044663 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:19.304560 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:19.307483 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:19.546626 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:19.800651 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:19.804200 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:20.044778 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:20.299976 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:20.304213 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:20.546086 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:20.806162 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:20.807861 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:21.018300 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:21.043856 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:21.300143 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:21.303913 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:21.550856 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:21.801803 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:21.806419 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:22.050724 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:22.301442 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:22.305383 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:22.547350 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:22.801439 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:22.804910 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:23.043098 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:23.301506 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:23.304158 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:23.517214 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:23.546017 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:23.802129 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:23.805663 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:24.043911 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:24.300689 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:24.304259 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:24.547690 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:24.800334 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:24.804402 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:25.044149 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:25.301362 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:25.304391 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:25.545010 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:25.801619 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:25.804667 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:26.027857 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:26.045762 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:26.301584 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:26.305382 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:26.545061 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:26.808846 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:26.809271 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:27.047166 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:27.300696 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:27.303794 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:27.543475 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:27.800216 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:27.803691 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:28.044996 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:28.301012 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:28.305378 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:28.522630 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:28.544834 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:28.809607 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:28.810797 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:29.044130 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:29.303099 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:29.304531 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:29.546927 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:29.801642 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:29.804565 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:30.046589 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:30.301518 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:30.304782 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:30.553412 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:30.802267 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:30.806077 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:31.016566 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:31.044029 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:31.302524 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:31.307648 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:31.545402 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:31.801180 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:31.804363 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:32.043196 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:32.300764 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:32.304013 1694879 kapi.go:107] duration metric: took 1m35.004707839s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1007 13:06:32.545169 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:32.801369 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:33.017511 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:33.044722 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:33.300184 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:33.544841 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:33.800845 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:34.044446 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:34.301180 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:34.545054 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:34.800935 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:35.018137 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:35.043833 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:35.300633 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:35.546660 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:35.801005 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:36.043805 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:36.300718 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:36.545814 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:36.801633 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:37.020064 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:37.044318 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:37.301171 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:37.543913 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:37.801805 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:38.052692 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:38.300730 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:38.555749 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:38.800844 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:39.045595 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:39.300064 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:39.517445 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:39.545028 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:39.801584 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:40.045171 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:40.300803 1694879 kapi.go:107] duration metric: took 1m39.004023429s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1007 13:06:40.303670 1694879 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-779469 cluster.
	I1007 13:06:40.306006 1694879 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1007 13:06:40.308200 1694879 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
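For reference, a minimal pod manifest that opts out of the credential mount might look like the sketch below. It assumes the `gcp-auth-skip-secret` label key named in the addon message above and reuses the busybox image and command from this test; the pod name and label value are illustrative, not taken from the test's testdata.

apiVersion: v1
kind: Pod
metadata:
  name: busybox-no-gcp-creds        # hypothetical name, not a pod from this test run
  labels:
    integration-test: busybox
    gcp-auth-skip-secret: "true"    # label key from the addon message above; value assumed
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]

A pod carrying this label should be scheduled without the /google-app-creds.json mount and the GOOGLE_APPLICATION_CREDENTIALS environment injected by the gcp-auth webhook, per the message above.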
	I1007 13:06:40.557378 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:41.044771 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:41.519547 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:41.552889 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:42.052596 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:42.544140 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:43.051470 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:43.551924 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:44.016827 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:44.044217 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:44.548629 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:45.045645 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:45.556513 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:46.036352 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:46.044458 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:46.545616 1694879 kapi.go:107] duration metric: took 1m49.506552077s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1007 13:06:46.548548 1694879 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, cloud-spanner, default-storageclass, ingress-dns, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1007 13:06:46.551303 1694879 addons.go:510] duration metric: took 1m55.974106905s for enable addons: enabled=[storage-provisioner nvidia-device-plugin cloud-spanner default-storageclass ingress-dns metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1007 13:06:48.516436 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:51.021453 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:53.517374 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:56.017284 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:58.516670 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:07:00.517627 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:07:01.554977 1694879 pod_ready.go:93] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"True"
	I1007 13:07:01.555009 1694879 pod_ready.go:82] duration metric: took 1m28.544651449s for pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace to be "Ready" ...
	I1007 13:07:01.555025 1694879 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-mgxtx" in "kube-system" namespace to be "Ready" ...
	I1007 13:07:01.562427 1694879 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-mgxtx" in "kube-system" namespace has status "Ready":"True"
	I1007 13:07:01.562455 1694879 pod_ready.go:82] duration metric: took 7.420344ms for pod "nvidia-device-plugin-daemonset-mgxtx" in "kube-system" namespace to be "Ready" ...
	I1007 13:07:01.562477 1694879 pod_ready.go:39] duration metric: took 1m30.155806852s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:07:01.562519 1694879 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:07:01.562594 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:07:01.562686 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:07:01.636816 1694879 cri.go:89] found id: "b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b"
	I1007 13:07:01.636892 1694879 cri.go:89] found id: ""
	I1007 13:07:01.636919 1694879 logs.go:282] 1 containers: [b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b]
	I1007 13:07:01.636977 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:01.641530 1694879 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:07:01.641614 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:07:01.685865 1694879 cri.go:89] found id: "c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596"
	I1007 13:07:01.685891 1694879 cri.go:89] found id: ""
	I1007 13:07:01.685901 1694879 logs.go:282] 1 containers: [c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596]
	I1007 13:07:01.685984 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:01.689900 1694879 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:07:01.690019 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:07:01.735298 1694879 cri.go:89] found id: "be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d"
	I1007 13:07:01.735392 1694879 cri.go:89] found id: ""
	I1007 13:07:01.735401 1694879 logs.go:282] 1 containers: [be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d]
	I1007 13:07:01.735473 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:01.739444 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:07:01.739560 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:07:01.802326 1694879 cri.go:89] found id: "2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa"
	I1007 13:07:01.802412 1694879 cri.go:89] found id: ""
	I1007 13:07:01.802437 1694879 logs.go:282] 1 containers: [2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa]
	I1007 13:07:01.802537 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:01.808400 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:07:01.808585 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:07:01.876985 1694879 cri.go:89] found id: "24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680"
	I1007 13:07:01.877064 1694879 cri.go:89] found id: ""
	I1007 13:07:01.877092 1694879 logs.go:282] 1 containers: [24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680]
	I1007 13:07:01.877192 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:01.889084 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:07:01.889225 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:07:01.949358 1694879 cri.go:89] found id: "e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d"
	I1007 13:07:01.949445 1694879 cri.go:89] found id: ""
	I1007 13:07:01.949475 1694879 logs.go:282] 1 containers: [e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d]
	I1007 13:07:01.949597 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:01.955161 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:07:01.955243 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:07:01.999371 1694879 cri.go:89] found id: "f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd"
	I1007 13:07:01.999397 1694879 cri.go:89] found id: ""
	I1007 13:07:01.999406 1694879 logs.go:282] 1 containers: [f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd]
	I1007 13:07:01.999466 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:02.004588 1694879 logs.go:123] Gathering logs for dmesg ...
	I1007 13:07:02.004706 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:07:02.025290 1694879 logs.go:123] Gathering logs for kube-apiserver [b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b] ...
	I1007 13:07:02.025330 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b"
	I1007 13:07:02.088297 1694879 logs.go:123] Gathering logs for kube-scheduler [2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa] ...
	I1007 13:07:02.088336 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa"
	I1007 13:07:02.141863 1694879 logs.go:123] Gathering logs for kube-proxy [24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680] ...
	I1007 13:07:02.141899 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680"
	I1007 13:07:02.181758 1694879 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:07:02.181789 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:07:02.284862 1694879 logs.go:123] Gathering logs for container status ...
	I1007 13:07:02.284901 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:07:02.345618 1694879 logs.go:123] Gathering logs for kubelet ...
	I1007 13:07:02.345663 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:07:02.456779 1694879 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:07:02.456815 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:07:02.649199 1694879 logs.go:123] Gathering logs for etcd [c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596] ...
	I1007 13:07:02.649232 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596"
	I1007 13:07:02.700881 1694879 logs.go:123] Gathering logs for coredns [be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d] ...
	I1007 13:07:02.700915 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d"
	I1007 13:07:02.745922 1694879 logs.go:123] Gathering logs for kube-controller-manager [e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d] ...
	I1007 13:07:02.745956 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d"
	I1007 13:07:02.820534 1694879 logs.go:123] Gathering logs for kindnet [f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd] ...
	I1007 13:07:02.820632 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd"
	I1007 13:07:05.361896 1694879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:07:05.375854 1694879 api_server.go:72] duration metric: took 2m14.799038512s to wait for apiserver process to appear ...
	I1007 13:07:05.375889 1694879 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:07:05.375940 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:07:05.376012 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:07:05.417609 1694879 cri.go:89] found id: "b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b"
	I1007 13:07:05.417634 1694879 cri.go:89] found id: ""
	I1007 13:07:05.417643 1694879 logs.go:282] 1 containers: [b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b]
	I1007 13:07:05.417701 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:05.421384 1694879 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:07:05.421454 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:07:05.458868 1694879 cri.go:89] found id: "c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596"
	I1007 13:07:05.458893 1694879 cri.go:89] found id: ""
	I1007 13:07:05.458902 1694879 logs.go:282] 1 containers: [c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596]
	I1007 13:07:05.458958 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:05.462476 1694879 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:07:05.462549 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:07:05.500301 1694879 cri.go:89] found id: "be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d"
	I1007 13:07:05.500324 1694879 cri.go:89] found id: ""
	I1007 13:07:05.500337 1694879 logs.go:282] 1 containers: [be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d]
	I1007 13:07:05.500392 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:05.503989 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:07:05.504067 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:07:05.548036 1694879 cri.go:89] found id: "2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa"
	I1007 13:07:05.548059 1694879 cri.go:89] found id: ""
	I1007 13:07:05.548066 1694879 logs.go:282] 1 containers: [2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa]
	I1007 13:07:05.548179 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:05.552691 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:07:05.552766 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:07:05.591012 1694879 cri.go:89] found id: "24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680"
	I1007 13:07:05.591034 1694879 cri.go:89] found id: ""
	I1007 13:07:05.591042 1694879 logs.go:282] 1 containers: [24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680]
	I1007 13:07:05.591099 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:05.594535 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:07:05.594605 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:07:05.633759 1694879 cri.go:89] found id: "e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d"
	I1007 13:07:05.633782 1694879 cri.go:89] found id: ""
	I1007 13:07:05.633790 1694879 logs.go:282] 1 containers: [e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d]
	I1007 13:07:05.633851 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:05.637362 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:07:05.637434 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:07:05.674027 1694879 cri.go:89] found id: "f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd"
	I1007 13:07:05.674050 1694879 cri.go:89] found id: ""
	I1007 13:07:05.674058 1694879 logs.go:282] 1 containers: [f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd]
	I1007 13:07:05.674112 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:05.677736 1694879 logs.go:123] Gathering logs for dmesg ...
	I1007 13:07:05.677763 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:07:05.694714 1694879 logs.go:123] Gathering logs for coredns [be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d] ...
	I1007 13:07:05.694744 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d"
	I1007 13:07:05.742070 1694879 logs.go:123] Gathering logs for kube-scheduler [2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa] ...
	I1007 13:07:05.742101 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa"
	I1007 13:07:05.788710 1694879 logs.go:123] Gathering logs for kube-proxy [24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680] ...
	I1007 13:07:05.788742 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680"
	I1007 13:07:05.832263 1694879 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:07:05.832291 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:07:05.926099 1694879 logs.go:123] Gathering logs for container status ...
	I1007 13:07:05.926140 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:07:05.992863 1694879 logs.go:123] Gathering logs for kubelet ...
	I1007 13:07:05.992899 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:07:06.110371 1694879 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:07:06.110412 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:07:06.258372 1694879 logs.go:123] Gathering logs for kube-apiserver [b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b] ...
	I1007 13:07:06.258404 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b"
	I1007 13:07:06.316897 1694879 logs.go:123] Gathering logs for etcd [c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596] ...
	I1007 13:07:06.316938 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596"
	I1007 13:07:06.372349 1694879 logs.go:123] Gathering logs for kube-controller-manager [e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d] ...
	I1007 13:07:06.372379 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d"
	I1007 13:07:06.439905 1694879 logs.go:123] Gathering logs for kindnet [f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd] ...
	I1007 13:07:06.439944 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd"
	I1007 13:07:08.987296 1694879 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:07:08.995108 1694879 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1007 13:07:08.996143 1694879 api_server.go:141] control plane version: v1.31.1
	I1007 13:07:08.996174 1694879 api_server.go:131] duration metric: took 3.620276222s to wait for apiserver health ...
	I1007 13:07:08.996183 1694879 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:07:08.996206 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:07:08.996274 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:07:09.041738 1694879 cri.go:89] found id: "b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b"
	I1007 13:07:09.041759 1694879 cri.go:89] found id: ""
	I1007 13:07:09.041767 1694879 logs.go:282] 1 containers: [b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b]
	I1007 13:07:09.041855 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:09.045416 1694879 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:07:09.045491 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:07:09.086865 1694879 cri.go:89] found id: "c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596"
	I1007 13:07:09.086936 1694879 cri.go:89] found id: ""
	I1007 13:07:09.086958 1694879 logs.go:282] 1 containers: [c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596]
	I1007 13:07:09.087053 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:09.091107 1694879 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:07:09.091245 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:07:09.131475 1694879 cri.go:89] found id: "be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d"
	I1007 13:07:09.131570 1694879 cri.go:89] found id: ""
	I1007 13:07:09.131595 1694879 logs.go:282] 1 containers: [be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d]
	I1007 13:07:09.131671 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:09.135811 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:07:09.135944 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:07:09.174730 1694879 cri.go:89] found id: "2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa"
	I1007 13:07:09.174752 1694879 cri.go:89] found id: ""
	I1007 13:07:09.174766 1694879 logs.go:282] 1 containers: [2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa]
	I1007 13:07:09.174826 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:09.178945 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:07:09.179023 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:07:09.218036 1694879 cri.go:89] found id: "24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680"
	I1007 13:07:09.218059 1694879 cri.go:89] found id: ""
	I1007 13:07:09.218066 1694879 logs.go:282] 1 containers: [24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680]
	I1007 13:07:09.218134 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:09.221902 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:07:09.221982 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:07:09.260955 1694879 cri.go:89] found id: "e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d"
	I1007 13:07:09.261029 1694879 cri.go:89] found id: ""
	I1007 13:07:09.261052 1694879 logs.go:282] 1 containers: [e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d]
	I1007 13:07:09.261149 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:09.265100 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:07:09.265176 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:07:09.303621 1694879 cri.go:89] found id: "f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd"
	I1007 13:07:09.303645 1694879 cri.go:89] found id: ""
	I1007 13:07:09.303654 1694879 logs.go:282] 1 containers: [f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd]
	I1007 13:07:09.303711 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:09.307406 1694879 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:07:09.307434 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:07:09.402973 1694879 logs.go:123] Gathering logs for kubelet ...
	I1007 13:07:09.403008 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:07:09.508633 1694879 logs.go:123] Gathering logs for dmesg ...
	I1007 13:07:09.508671 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:07:09.526115 1694879 logs.go:123] Gathering logs for etcd [c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596] ...
	I1007 13:07:09.526146 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596"
	I1007 13:07:09.586886 1694879 logs.go:123] Gathering logs for coredns [be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d] ...
	I1007 13:07:09.586917 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d"
	I1007 13:07:09.625948 1694879 logs.go:123] Gathering logs for kube-scheduler [2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa] ...
	I1007 13:07:09.625977 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa"
	I1007 13:07:09.670140 1694879 logs.go:123] Gathering logs for kindnet [f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd] ...
	I1007 13:07:09.670171 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd"
	I1007 13:07:09.717533 1694879 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:07:09.717560 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:07:09.862438 1694879 logs.go:123] Gathering logs for kube-apiserver [b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b] ...
	I1007 13:07:09.862470 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b"
	I1007 13:07:09.918608 1694879 logs.go:123] Gathering logs for kube-proxy [24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680] ...
	I1007 13:07:09.918641 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680"
	I1007 13:07:09.960260 1694879 logs.go:123] Gathering logs for kube-controller-manager [e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d] ...
	I1007 13:07:09.960291 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d"
	I1007 13:07:10.048353 1694879 logs.go:123] Gathering logs for container status ...
	I1007 13:07:10.048391 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:07:12.619477 1694879 system_pods.go:59] 18 kube-system pods found
	I1007 13:07:12.619972 1694879 system_pods.go:61] "coredns-7c65d6cfc9-kfrdl" [14d3df12-3c3d-42c8-aa8c-b4df3c618109] Running
	I1007 13:07:12.620006 1694879 system_pods.go:61] "csi-hostpath-attacher-0" [ff752214-ae2a-4f9c-961b-c35b8e8ba378] Running
	I1007 13:07:12.620019 1694879 system_pods.go:61] "csi-hostpath-resizer-0" [e7ae9420-a05a-4b44-9fa3-4ed00911fdb6] Running
	I1007 13:07:12.620031 1694879 system_pods.go:61] "csi-hostpathplugin-zkm7b" [3c568c8f-d491-46a6-b174-813f2ebcb2db] Running
	I1007 13:07:12.620044 1694879 system_pods.go:61] "etcd-addons-779469" [b9acbc51-2544-4ede-9914-b047804d4588] Running
	I1007 13:07:12.620050 1694879 system_pods.go:61] "kindnet-7g5zx" [1fbe4b22-9d49-433e-a471-d43e712fac98] Running
	I1007 13:07:12.620060 1694879 system_pods.go:61] "kube-apiserver-addons-779469" [47acf6d3-9a8b-4f39-a33b-3597a6552c9d] Running
	I1007 13:07:12.620064 1694879 system_pods.go:61] "kube-controller-manager-addons-779469" [f50b4a30-f444-4092-a7aa-89de7f71f64c] Running
	I1007 13:07:12.620075 1694879 system_pods.go:61] "kube-ingress-dns-minikube" [a86273b1-4cac-4662-930e-44ffe2fcc91f] Running
	I1007 13:07:12.620084 1694879 system_pods.go:61] "kube-proxy-6ncrf" [b8ff1258-fb1b-4c1c-ad5f-039e47f37a2a] Running
	I1007 13:07:12.620089 1694879 system_pods.go:61] "kube-scheduler-addons-779469" [ba19f222-1069-45d1-9e3e-2a085a065db6] Running
	I1007 13:07:12.620093 1694879 system_pods.go:61] "metrics-server-84c5f94fbc-zhbq5" [aadc85ae-34d8-46da-8c72-e453e7246ef7] Running
	I1007 13:07:12.620097 1694879 system_pods.go:61] "nvidia-device-plugin-daemonset-mgxtx" [981684ce-573b-4c82-a5d9-19d8c41421ce] Running
	I1007 13:07:12.620104 1694879 system_pods.go:61] "registry-66c9cd494c-b8457" [37368b21-bd4d-4d7c-b2ee-31f62690e0b7] Running
	I1007 13:07:12.620109 1694879 system_pods.go:61] "registry-proxy-p4tjk" [7f540d5b-5976-4e89-b2f2-c934d659d3f3] Running
	I1007 13:07:12.620121 1694879 system_pods.go:61] "snapshot-controller-56fcc65765-dzq9x" [eb3418ae-d06e-4798-ab91-395da46f8aa0] Running
	I1007 13:07:12.620125 1694879 system_pods.go:61] "snapshot-controller-56fcc65765-zqkd5" [67b9d86f-dbed-4441-929d-1cc25f4c2d59] Running
	I1007 13:07:12.620136 1694879 system_pods.go:61] "storage-provisioner" [9832c3db-5664-45e0-8be0-4521d011f68b] Running
	I1007 13:07:12.620147 1694879 system_pods.go:74] duration metric: took 3.623953566s to wait for pod list to return data ...
	I1007 13:07:12.620160 1694879 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:07:12.622986 1694879 default_sa.go:45] found service account: "default"
	I1007 13:07:12.623011 1694879 default_sa.go:55] duration metric: took 2.837203ms for default service account to be created ...
	I1007 13:07:12.623020 1694879 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:07:12.632932 1694879 system_pods.go:86] 18 kube-system pods found
	I1007 13:07:12.632966 1694879 system_pods.go:89] "coredns-7c65d6cfc9-kfrdl" [14d3df12-3c3d-42c8-aa8c-b4df3c618109] Running
	I1007 13:07:12.632974 1694879 system_pods.go:89] "csi-hostpath-attacher-0" [ff752214-ae2a-4f9c-961b-c35b8e8ba378] Running
	I1007 13:07:12.632980 1694879 system_pods.go:89] "csi-hostpath-resizer-0" [e7ae9420-a05a-4b44-9fa3-4ed00911fdb6] Running
	I1007 13:07:12.632985 1694879 system_pods.go:89] "csi-hostpathplugin-zkm7b" [3c568c8f-d491-46a6-b174-813f2ebcb2db] Running
	I1007 13:07:12.632990 1694879 system_pods.go:89] "etcd-addons-779469" [b9acbc51-2544-4ede-9914-b047804d4588] Running
	I1007 13:07:12.632995 1694879 system_pods.go:89] "kindnet-7g5zx" [1fbe4b22-9d49-433e-a471-d43e712fac98] Running
	I1007 13:07:12.632999 1694879 system_pods.go:89] "kube-apiserver-addons-779469" [47acf6d3-9a8b-4f39-a33b-3597a6552c9d] Running
	I1007 13:07:12.633004 1694879 system_pods.go:89] "kube-controller-manager-addons-779469" [f50b4a30-f444-4092-a7aa-89de7f71f64c] Running
	I1007 13:07:12.633008 1694879 system_pods.go:89] "kube-ingress-dns-minikube" [a86273b1-4cac-4662-930e-44ffe2fcc91f] Running
	I1007 13:07:12.633018 1694879 system_pods.go:89] "kube-proxy-6ncrf" [b8ff1258-fb1b-4c1c-ad5f-039e47f37a2a] Running
	I1007 13:07:12.633023 1694879 system_pods.go:89] "kube-scheduler-addons-779469" [ba19f222-1069-45d1-9e3e-2a085a065db6] Running
	I1007 13:07:12.633033 1694879 system_pods.go:89] "metrics-server-84c5f94fbc-zhbq5" [aadc85ae-34d8-46da-8c72-e453e7246ef7] Running
	I1007 13:07:12.633038 1694879 system_pods.go:89] "nvidia-device-plugin-daemonset-mgxtx" [981684ce-573b-4c82-a5d9-19d8c41421ce] Running
	I1007 13:07:12.633044 1694879 system_pods.go:89] "registry-66c9cd494c-b8457" [37368b21-bd4d-4d7c-b2ee-31f62690e0b7] Running
	I1007 13:07:12.633051 1694879 system_pods.go:89] "registry-proxy-p4tjk" [7f540d5b-5976-4e89-b2f2-c934d659d3f3] Running
	I1007 13:07:12.633055 1694879 system_pods.go:89] "snapshot-controller-56fcc65765-dzq9x" [eb3418ae-d06e-4798-ab91-395da46f8aa0] Running
	I1007 13:07:12.633059 1694879 system_pods.go:89] "snapshot-controller-56fcc65765-zqkd5" [67b9d86f-dbed-4441-929d-1cc25f4c2d59] Running
	I1007 13:07:12.633063 1694879 system_pods.go:89] "storage-provisioner" [9832c3db-5664-45e0-8be0-4521d011f68b] Running
	I1007 13:07:12.633077 1694879 system_pods.go:126] duration metric: took 10.050502ms to wait for k8s-apps to be running ...
	I1007 13:07:12.633101 1694879 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:07:12.633165 1694879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:07:12.645370 1694879 system_svc.go:56] duration metric: took 12.259666ms WaitForService to wait for kubelet
	I1007 13:07:12.645396 1694879 kubeadm.go:582] duration metric: took 2m22.068585334s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:07:12.645417 1694879 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:07:12.649188 1694879 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 13:07:12.649221 1694879 node_conditions.go:123] node cpu capacity is 2
	I1007 13:07:12.649233 1694879 node_conditions.go:105] duration metric: took 3.811104ms to run NodePressure ...
	I1007 13:07:12.649246 1694879 start.go:241] waiting for startup goroutines ...
	I1007 13:07:12.649253 1694879 start.go:246] waiting for cluster config update ...
	I1007 13:07:12.649268 1694879 start.go:255] writing updated cluster config ...
	I1007 13:07:12.649573 1694879 ssh_runner.go:195] Run: rm -f paused
	I1007 13:07:13.000900 1694879 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:07:13.006700 1694879 out.go:177] * Done! kubectl is now configured to use "addons-779469" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 07 13:18:04 addons-779469 crio[968]: time="2024-10-07 13:18:04.938437568Z" level=info msg="Started container" PID=13899 containerID=7f30df4757c5e04fb7d6bf06334e177eb40e2c81e5e5d9a5c8e03d01a82f07b8 description=default/busybox/busybox id=a92c51f9-c51c-4164-b908-dd58894029eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd684e037cfa45b7f169a41cbcb9c03a688b40a2c8c342530345c43ce5a0194a
	Oct 07 13:19:37 addons-779469 crio[968]: time="2024-10-07 13:19:37.677110715Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-nkjm2/POD" id=cbddf6d2-3291-4f9a-998a-b335fee442e9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 07 13:19:37 addons-779469 crio[968]: time="2024-10-07 13:19:37.677167879Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 07 13:19:37 addons-779469 crio[968]: time="2024-10-07 13:19:37.710127584Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-nkjm2 Namespace:default ID:8d03b82f6f3076f0374223a91dee1490a211ada994325f9f406752e5d891ce40 UID:3ddf396e-9e9b-473e-9390-6c405dee8c4e NetNS:/var/run/netns/4f18011c-547d-4a08-8b72-3661a22dc4d1 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 07 13:19:37 addons-779469 crio[968]: time="2024-10-07 13:19:37.710181342Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-nkjm2 to CNI network \"kindnet\" (type=ptp)"
	Oct 07 13:19:37 addons-779469 crio[968]: time="2024-10-07 13:19:37.724184316Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-nkjm2 Namespace:default ID:8d03b82f6f3076f0374223a91dee1490a211ada994325f9f406752e5d891ce40 UID:3ddf396e-9e9b-473e-9390-6c405dee8c4e NetNS:/var/run/netns/4f18011c-547d-4a08-8b72-3661a22dc4d1 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 07 13:19:37 addons-779469 crio[968]: time="2024-10-07 13:19:37.724335237Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-nkjm2 for CNI network kindnet (type=ptp)"
	Oct 07 13:19:37 addons-779469 crio[968]: time="2024-10-07 13:19:37.726862880Z" level=info msg="Ran pod sandbox 8d03b82f6f3076f0374223a91dee1490a211ada994325f9f406752e5d891ce40 with infra container: default/hello-world-app-55bf9c44b4-nkjm2/POD" id=cbddf6d2-3291-4f9a-998a-b335fee442e9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 07 13:19:37 addons-779469 crio[968]: time="2024-10-07 13:19:37.728141881Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c97d7dc1-40d0-4100-aa49-7c99219af9d5 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 13:19:37 addons-779469 crio[968]: time="2024-10-07 13:19:37.728361330Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=c97d7dc1-40d0-4100-aa49-7c99219af9d5 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 13:19:37 addons-779469 crio[968]: time="2024-10-07 13:19:37.729003324Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=ecfbe0bf-ff36-4f63-868b-d247298c2276 name=/runtime.v1.ImageService/PullImage
	Oct 07 13:19:37 addons-779469 crio[968]: time="2024-10-07 13:19:37.734644138Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 07 13:19:38 addons-779469 crio[968]: time="2024-10-07 13:19:38.027692186Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 07 13:19:38 addons-779469 crio[968]: time="2024-10-07 13:19:38.774268017Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=ecfbe0bf-ff36-4f63-868b-d247298c2276 name=/runtime.v1.ImageService/PullImage
	Oct 07 13:19:38 addons-779469 crio[968]: time="2024-10-07 13:19:38.775183941Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=40079d56-34b2-463e-ae7c-83f204853b52 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 13:19:38 addons-779469 crio[968]: time="2024-10-07 13:19:38.775821792Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=40079d56-34b2-463e-ae7c-83f204853b52 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 13:19:38 addons-779469 crio[968]: time="2024-10-07 13:19:38.776673201Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=8a73d5a0-1ddb-4316-ac05-125c3b377e13 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 13:19:38 addons-779469 crio[968]: time="2024-10-07 13:19:38.777276534Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8a73d5a0-1ddb-4316-ac05-125c3b377e13 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 13:19:38 addons-779469 crio[968]: time="2024-10-07 13:19:38.779369283Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-nkjm2/hello-world-app" id=6b8e885d-3e3a-4c42-b940-7b7387b811b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 07 13:19:38 addons-779469 crio[968]: time="2024-10-07 13:19:38.779458496Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 07 13:19:38 addons-779469 crio[968]: time="2024-10-07 13:19:38.807937366Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/acb6d17263ec963fae31252cd54024ff22ed8f917b5eccc4e59cbfc6444ba864/merged/etc/passwd: no such file or directory"
	Oct 07 13:19:38 addons-779469 crio[968]: time="2024-10-07 13:19:38.807989681Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/acb6d17263ec963fae31252cd54024ff22ed8f917b5eccc4e59cbfc6444ba864/merged/etc/group: no such file or directory"
	Oct 07 13:19:38 addons-779469 crio[968]: time="2024-10-07 13:19:38.890848418Z" level=info msg="Created container f03a17aca82ab98b5be289238d8af0aa84b55f488fa47cf31be06670f26df301: default/hello-world-app-55bf9c44b4-nkjm2/hello-world-app" id=6b8e885d-3e3a-4c42-b940-7b7387b811b5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 07 13:19:38 addons-779469 crio[968]: time="2024-10-07 13:19:38.891740204Z" level=info msg="Starting container: f03a17aca82ab98b5be289238d8af0aa84b55f488fa47cf31be06670f26df301" id=5fc9ef8a-4ac1-4531-b4c6-53959b0bf1f6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 07 13:19:38 addons-779469 crio[968]: time="2024-10-07 13:19:38.903001435Z" level=info msg="Started container" PID=14085 containerID=f03a17aca82ab98b5be289238d8af0aa84b55f488fa47cf31be06670f26df301 description=default/hello-world-app-55bf9c44b4-nkjm2/hello-world-app id=5fc9ef8a-4ac1-4531-b4c6-53959b0bf1f6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8d03b82f6f3076f0374223a91dee1490a211ada994325f9f406752e5d891ce40
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	f03a17aca82ab       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   8d03b82f6f307       hello-world-app-55bf9c44b4-nkjm2
	7f30df4757c5e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          About a minute ago       Running             busybox                   0                   fd684e037cfa4       busybox
	c42b6b433c54a       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago            Running             nginx                     0                   d3b9e9b83b78e       nginx
	26c6bfdc08fd3       registry.k8s.io/ingress-nginx/controller@sha256:22f9d129ae8c89a2cabbd13af3c1668944f3dd68fec186199b7024a0a2fc75b3             12 minutes ago           Running             controller                0                   cbfa9a000b43f       ingress-nginx-controller-bc57996ff-nlxnm
	05c76cf82fc4b       420193b27261a8d37b9fb1faeed45094cefa47e72a7538fd5a6c05e8b5ce192e                                                             13 minutes ago           Exited              patch                     2                   6cc4c0bb3a3de       ingress-nginx-admission-patch-9jhtl
	9bfab86c6a487       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             13 minutes ago           Running             local-path-provisioner    0                   35b2b92b491a2       local-path-provisioner-86d989889c-rrhx5
	db654ea3a3f2e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   13 minutes ago           Exited              create                    0                   90be504daff12       ingress-nginx-admission-create-69978
	fc0b148e46b99       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        13 minutes ago           Running             metrics-server            0                   ae537516a580d       metrics-server-84c5f94fbc-zhbq5
	adbf96df2004d       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             14 minutes ago           Running             minikube-ingress-dns      0                   fd24691374db4       kube-ingress-dns-minikube
	457b9e07e729d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             14 minutes ago           Running             storage-provisioner       0                   71893a9e25358       storage-provisioner
	be3a55f354462       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             14 minutes ago           Running             coredns                   0                   6ff1e68c7ffac       coredns-7c65d6cfc9-kfrdl
	f5c08bdd49644       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                             14 minutes ago           Running             kindnet-cni               0                   27d28a7719dbe       kindnet-7g5zx
	24b2cc84e135f       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             14 minutes ago           Running             kube-proxy                0                   32f4c8a9cf354       kube-proxy-6ncrf
	e48b3531357e8       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             15 minutes ago           Running             kube-controller-manager   0                   39515888518c6       kube-controller-manager-addons-779469
	2e2a39495c277       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             15 minutes ago           Running             kube-scheduler            0                   ee8de6b60a689       kube-scheduler-addons-779469
	b8cf421e0e643       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             15 minutes ago           Running             kube-apiserver            0                   93c97f093c738       kube-apiserver-addons-779469
	c0d2a0e8c63b6       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             15 minutes ago           Running             etcd                      0                   8da69e060c1ec       etcd-addons-779469
	
	
	==> coredns [be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d] <==
	[INFO] 10.244.0.7:47238 - 21186 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002787366s
	[INFO] 10.244.0.7:47238 - 52888 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000133569s
	[INFO] 10.244.0.7:47238 - 23194 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000103316s
	[INFO] 10.244.0.7:46474 - 55627 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000142413s
	[INFO] 10.244.0.7:46474 - 55406 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000046465s
	[INFO] 10.244.0.7:44034 - 32883 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000123673s
	[INFO] 10.244.0.7:44034 - 33055 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000070735s
	[INFO] 10.244.0.7:40584 - 47565 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094193s
	[INFO] 10.244.0.7:40584 - 47153 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090131s
	[INFO] 10.244.0.7:48715 - 22552 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001710254s
	[INFO] 10.244.0.7:48715 - 22980 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.0016573s
	[INFO] 10.244.0.7:34923 - 30467 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000047793s
	[INFO] 10.244.0.7:34923 - 30336 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000079538s
	[INFO] 10.244.0.20:42014 - 24661 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000177777s
	[INFO] 10.244.0.20:57376 - 24748 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000127111s
	[INFO] 10.244.0.20:47050 - 775 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000095112s
	[INFO] 10.244.0.20:59445 - 15350 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108314s
	[INFO] 10.244.0.20:43795 - 45545 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000084002s
	[INFO] 10.244.0.20:43461 - 46587 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000097303s
	[INFO] 10.244.0.20:59067 - 9049 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002266369s
	[INFO] 10.244.0.20:44165 - 39978 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002128829s
	[INFO] 10.244.0.20:47605 - 45227 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00163412s
	[INFO] 10.244.0.20:53946 - 12634 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001483232s
	[INFO] 10.244.0.23:54767 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000223954s
	[INFO] 10.244.0.23:39206 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00015349s
	
	
	==> describe nodes <==
	Name:               addons-779469
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-779469
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=addons-779469
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T13_04_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-779469
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 13:04:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-779469
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 13:19:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 13:18:23 +0000   Mon, 07 Oct 2024 13:04:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 13:18:23 +0000   Mon, 07 Oct 2024 13:04:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 13:18:23 +0000   Mon, 07 Oct 2024 13:04:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 13:18:23 +0000   Mon, 07 Oct 2024 13:05:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-779469
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b185a85bbe647cab0ea7a44daf0565d
	  System UUID:                54196e40-8b0f-42a4-8424-aec1d4cf9b79
	  Boot ID:                    aa802e8e-7a27-4e80-bbf6-ed0c45666ec2
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-world-app-55bf9c44b4-nkjm2            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-nlxnm    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-kfrdl                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-addons-779469                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-7g5zx                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-addons-779469                250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-779469       200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-6ncrf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-779469                100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-84c5f94fbc-zhbq5             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-rrhx5     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 14m   kube-proxy       
	  Normal   Starting                 14m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  14m   kubelet          Node addons-779469 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m   kubelet          Node addons-779469 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m   kubelet          Node addons-779469 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m   node-controller  Node addons-779469 event: Registered Node addons-779469 in Controller
	  Normal   NodeReady                14m   kubelet          Node addons-779469 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596] <==
	{"level":"info","ts":"2024-10-07T13:04:38.940137Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T13:04:38.940200Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T13:04:38.943666Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T13:04:38.943741Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-07T13:04:51.260957Z","caller":"traceutil/trace.go:171","msg":"trace[1187122494] transaction","detail":"{read_only:false; response_revision:354; number_of_response:1; }","duration":"153.946896ms","start":"2024-10-07T13:04:51.106987Z","end":"2024-10-07T13:04:51.260934Z","steps":["trace[1187122494] 'process raft request'  (duration: 98.213628ms)","trace[1187122494] 'compare'  (duration: 54.408645ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T13:04:53.405087Z","caller":"traceutil/trace.go:171","msg":"trace[1322881476] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"116.257936ms","start":"2024-10-07T13:04:53.288813Z","end":"2024-10-07T13:04:53.405071Z","steps":["trace[1322881476] 'process raft request'  (duration: 115.890716ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:04:53.540216Z","caller":"traceutil/trace.go:171","msg":"trace[1909237663] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"116.726373ms","start":"2024-10-07T13:04:53.423472Z","end":"2024-10-07T13:04:53.540199Z","steps":["trace[1909237663] 'process raft request'  (duration: 100.233669ms)","trace[1909237663] 'compare'  (duration: 15.994262ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T13:04:53.540450Z","caller":"traceutil/trace.go:171","msg":"trace[1465488526] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"116.88906ms","start":"2024-10-07T13:04:53.423554Z","end":"2024-10-07T13:04:53.540443Z","steps":["trace[1465488526] 'process raft request'  (duration: 116.239467ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:04:53.706090Z","caller":"traceutil/trace.go:171","msg":"trace[1880002528] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"105.141531ms","start":"2024-10-07T13:04:53.600724Z","end":"2024-10-07T13:04:53.705866Z","steps":["trace[1880002528] 'process raft request'  (duration: 45.286156ms)","trace[1880002528] 'compare'  (duration: 59.685221ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T13:04:54.094112Z","caller":"traceutil/trace.go:171","msg":"trace[1510152245] linearizableReadLoop","detail":"{readStateIndex:387; appliedIndex:384; }","duration":"100.181173ms","start":"2024-10-07T13:04:53.993917Z","end":"2024-10-07T13:04:54.094098Z","steps":["trace[1510152245] 'read index received'  (duration: 132.124µs)","trace[1510152245] 'applied index is now lower than readState.Index'  (duration: 100.024713ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T13:04:54.094431Z","caller":"traceutil/trace.go:171","msg":"trace[1936988130] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"142.348415ms","start":"2024-10-07T13:04:53.952044Z","end":"2024-10-07T13:04:54.094393Z","steps":["trace[1936988130] 'process raft request'  (duration: 107.914219ms)","trace[1936988130] 'compare'  (duration: 33.993976ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T13:04:54.094723Z","caller":"traceutil/trace.go:171","msg":"trace[859127559] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"100.979359ms","start":"2024-10-07T13:04:53.993737Z","end":"2024-10-07T13:04:54.094716Z","steps":["trace[859127559] 'process raft request'  (duration: 100.306397ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:04:54.094980Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.046483ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-779469\" ","response":"range_response_count:1 size:5745"}
	{"level":"info","ts":"2024-10-07T13:04:54.095091Z","caller":"traceutil/trace.go:171","msg":"trace[1057137103] range","detail":"{range_begin:/registry/minions/addons-779469; range_end:; response_count:1; response_revision:374; }","duration":"101.169574ms","start":"2024-10-07T13:04:53.993913Z","end":"2024-10-07T13:04:54.095083Z","steps":["trace[1057137103] 'agreement among raft nodes before linearized reading'  (duration: 100.974739ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:04:54.935903Z","caller":"traceutil/trace.go:171","msg":"trace[2134927529] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"129.506183ms","start":"2024-10-07T13:04:54.806374Z","end":"2024-10-07T13:04:54.935880Z","steps":["trace[2134927529] 'process raft request'  (duration: 129.385849ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:04:54.936569Z","caller":"traceutil/trace.go:171","msg":"trace[940512204] linearizableReadLoop","detail":"{readStateIndex:404; appliedIndex:404; }","duration":"127.00876ms","start":"2024-10-07T13:04:54.809548Z","end":"2024-10-07T13:04:54.936557Z","steps":["trace[940512204] 'read index received'  (duration: 127.00259ms)","trace[940512204] 'applied index is now lower than readState.Index'  (duration: 4.694µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-07T13:04:54.963043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.474773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-addons-779469\" ","response":"range_response_count:1 size:4488"}
	{"level":"info","ts":"2024-10-07T13:04:54.963107Z","caller":"traceutil/trace.go:171","msg":"trace[585043007] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-addons-779469; range_end:; response_count:1; response_revision:389; }","duration":"153.550275ms","start":"2024-10-07T13:04:54.809542Z","end":"2024-10-07T13:04:54.963093Z","steps":["trace[585043007] 'agreement among raft nodes before linearized reading'  (duration: 127.249912ms)","trace[585043007] 'range keys from in-memory index tree'  (duration: 26.193469ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T13:04:54.968215Z","caller":"traceutil/trace.go:171","msg":"trace[936267434] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"149.443043ms","start":"2024-10-07T13:04:54.818751Z","end":"2024-10-07T13:04:54.968194Z","steps":["trace[936267434] 'process raft request'  (duration: 139.955985ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:04:54.981989Z","caller":"traceutil/trace.go:171","msg":"trace[249121898] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"153.922862ms","start":"2024-10-07T13:04:54.828050Z","end":"2024-10-07T13:04:54.981973Z","steps":["trace[249121898] 'process raft request'  (duration: 140.097332ms)","trace[249121898] 'compare'  (duration: 13.727842ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-07T13:04:54.999564Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.302993ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:04:54.999727Z","caller":"traceutil/trace.go:171","msg":"trace[1621784510] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:399; }","duration":"180.45997ms","start":"2024-10-07T13:04:54.819239Z","end":"2024-10-07T13:04:54.999699Z","steps":["trace[1621784510] 'agreement among raft nodes before linearized reading'  (duration: 180.265127ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:14:40.182383Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1494}
	{"level":"info","ts":"2024-10-07T13:14:40.217210Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1494,"took":"34.266919ms","hash":3207175570,"current-db-size-bytes":5992448,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3166208,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2024-10-07T13:14:40.217265Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3207175570,"revision":1494,"compact-revision":-1}
	
	
	==> kernel <==
	 13:19:39 up 1 day,  3:02,  0 users,  load average: 0.26, 0.80, 1.54
	Linux addons-779469 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd] <==
	I1007 13:17:30.798686       1 main.go:299] handling current node
	I1007 13:17:40.798640       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:17:40.798759       1 main.go:299] handling current node
	I1007 13:17:50.798889       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:17:50.798927       1 main.go:299] handling current node
	I1007 13:18:00.799676       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:18:00.799712       1 main.go:299] handling current node
	I1007 13:18:10.800153       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:18:10.800188       1 main.go:299] handling current node
	I1007 13:18:20.798672       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:18:20.798726       1 main.go:299] handling current node
	I1007 13:18:30.802179       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:18:30.802238       1 main.go:299] handling current node
	I1007 13:18:40.806260       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:18:40.806298       1 main.go:299] handling current node
	I1007 13:18:50.799480       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:18:50.799519       1 main.go:299] handling current node
	I1007 13:19:00.798680       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:19:00.798717       1 main.go:299] handling current node
	I1007 13:19:10.801147       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:19:10.801261       1 main.go:299] handling current node
	I1007 13:19:20.807603       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:19:20.807715       1 main.go:299] handling current node
	I1007 13:19:30.806519       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:19:30.806551       1 main.go:299] handling current node
	
	
	==> kube-apiserver [b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b] <==
	 > logger="UnhandledError"
	I1007 13:07:01.553470       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1007 13:15:26.424354       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.241.1"}
	E1007 13:15:41.568056       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1007 13:15:54.347732       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1007 13:15:57.808470       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1007 13:16:30.579213       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1007 13:16:57.742849       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 13:16:57.742981       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 13:16:57.799184       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 13:16:57.800053       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 13:16:57.898292       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 13:16:57.898339       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 13:16:57.903196       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 13:16:57.903329       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 13:16:57.930205       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 13:16:57.930254       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1007 13:16:58.898588       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1007 13:16:58.930693       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1007 13:16:59.047840       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1007 13:17:11.585912       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1007 13:17:12.636729       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1007 13:17:17.183142       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1007 13:17:17.472221       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.109.87"}
	I1007 13:19:37.636077       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.138.164"}
	
	
	==> kube-controller-manager [e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d] <==
	I1007 13:17:52.606229       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-779469"
	W1007 13:18:07.119597       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:18:07.119642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 13:18:15.245289       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:18:15.245337       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 13:18:16.995262       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:18:16.995402       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 13:18:23.220276       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-779469"
	W1007 13:18:23.508913       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:18:23.508958       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 13:18:54.678630       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:18:54.678769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 13:18:55.257293       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:18:55.257334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 13:19:01.371017       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:19:01.371059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 13:19:14.021576       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:19:14.021622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 13:19:37.381128       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="37.320591ms"
	I1007 13:19:37.400715       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="18.501046ms"
	I1007 13:19:37.401683       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.745µs"
	W1007 13:19:37.488348       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:19:37.488478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 13:19:39.140944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="15.310166ms"
	I1007 13:19:39.141092       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="33.69µs"
	
	
	==> kube-proxy [24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680] <==
	I1007 13:04:50.447774       1 server_linux.go:66] "Using iptables proxy"
	I1007 13:04:50.560966       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1007 13:04:50.561114       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 13:04:50.646955       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1007 13:04:50.647095       1 server_linux.go:169] "Using iptables Proxier"
	I1007 13:04:50.651856       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 13:04:50.652510       1 server.go:483] "Version info" version="v1.31.1"
	I1007 13:04:50.652585       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 13:04:50.669284       1 config.go:199] "Starting service config controller"
	I1007 13:04:50.669861       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 13:04:50.669944       1 config.go:105] "Starting endpoint slice config controller"
	I1007 13:04:50.669982       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 13:04:50.670583       1 config.go:328] "Starting node config controller"
	I1007 13:04:50.672000       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 13:04:50.772635       1 shared_informer.go:320] Caches are synced for node config
	I1007 13:04:50.779828       1 shared_informer.go:320] Caches are synced for service config
	I1007 13:04:50.779911       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa] <==
	W1007 13:04:43.027679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 13:04:43.027794       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.027913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 13:04:43.027962       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.028079       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 13:04:43.028123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.028214       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 13:04:43.028254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.028421       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 13:04:43.028468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.028559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 13:04:43.028642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.028609       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 13:04:43.028796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.030342       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 13:04:43.030377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.030346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 13:04:43.030402       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.030466       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 13:04:43.030514       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.030529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 13:04:43.030624       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.030476       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 13:04:43.030655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1007 13:04:44.321571       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 13:18:05 addons-779469 kubelet[1493]: E1007 13:18:05.127062    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307085126788634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:18:05 addons-779469 kubelet[1493]: E1007 13:18:05.127104    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307085126788634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:18:05 addons-779469 kubelet[1493]: I1007 13:18:05.921948    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 13:18:05 addons-779469 kubelet[1493]: I1007 13:18:05.933509    1493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=47.000342938 podStartE2EDuration="48.93348908s" podCreationTimestamp="2024-10-07 13:17:17 +0000 UTC" firstStartedPulling="2024-10-07 13:17:17.756885684 +0000 UTC m=+753.063082299" lastFinishedPulling="2024-10-07 13:17:19.690031826 +0000 UTC m=+754.996228441" observedRunningTime="2024-10-07 13:17:19.843253949 +0000 UTC m=+755.149450572" watchObservedRunningTime="2024-10-07 13:18:05.93348908 +0000 UTC m=+801.239685687"
	Oct 07 13:18:15 addons-779469 kubelet[1493]: E1007 13:18:15.130035    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307095129770288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:18:15 addons-779469 kubelet[1493]: E1007 13:18:15.130079    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307095129770288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:18:25 addons-779469 kubelet[1493]: E1007 13:18:25.133230    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307105132939013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:18:25 addons-779469 kubelet[1493]: E1007 13:18:25.133274    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307105132939013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:18:35 addons-779469 kubelet[1493]: E1007 13:18:35.135994    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307115135700570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:18:35 addons-779469 kubelet[1493]: E1007 13:18:35.136034    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307115135700570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:18:45 addons-779469 kubelet[1493]: E1007 13:18:45.139457    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307125139101245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:18:45 addons-779469 kubelet[1493]: E1007 13:18:45.139503    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307125139101245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:18:55 addons-779469 kubelet[1493]: E1007 13:18:55.142521    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307135142286145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:18:55 addons-779469 kubelet[1493]: E1007 13:18:55.142560    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307135142286145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:19:05 addons-779469 kubelet[1493]: E1007 13:19:05.146052    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307145145756114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:19:05 addons-779469 kubelet[1493]: E1007 13:19:05.146104    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307145145756114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:19:07 addons-779469 kubelet[1493]: I1007 13:19:07.813495    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 13:19:15 addons-779469 kubelet[1493]: E1007 13:19:15.149137    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307155148894846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:19:15 addons-779469 kubelet[1493]: E1007 13:19:15.149170    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307155148894846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:19:25 addons-779469 kubelet[1493]: E1007 13:19:25.152159    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307165151593526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:19:25 addons-779469 kubelet[1493]: E1007 13:19:25.152201    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307165151593526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:19:35 addons-779469 kubelet[1493]: E1007 13:19:35.154992    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307175154464014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:19:35 addons-779469 kubelet[1493]: E1007 13:19:35.155031    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307175154464014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:19:37 addons-779469 kubelet[1493]: I1007 13:19:37.375300    1493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=93.597503728 podStartE2EDuration="12m24.375283755s" podCreationTimestamp="2024-10-07 13:07:13 +0000 UTC" firstStartedPulling="2024-10-07 13:07:14.086568544 +0000 UTC m=+149.392765159" lastFinishedPulling="2024-10-07 13:18:04.864348571 +0000 UTC m=+800.170545186" observedRunningTime="2024-10-07 13:18:05.936376074 +0000 UTC m=+801.242572681" watchObservedRunningTime="2024-10-07 13:19:37.375283755 +0000 UTC m=+892.681480362"
	Oct 07 13:19:37 addons-779469 kubelet[1493]: I1007 13:19:37.475219    1493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bm6c\" (UniqueName: \"kubernetes.io/projected/3ddf396e-9e9b-473e-9390-6c405dee8c4e-kube-api-access-7bm6c\") pod \"hello-world-app-55bf9c44b4-nkjm2\" (UID: \"3ddf396e-9e9b-473e-9390-6c405dee8c4e\") " pod="default/hello-world-app-55bf9c44b4-nkjm2"
	
	
	==> storage-provisioner [457b9e07e729d3ad0810718988e98d201b8b41ad16425a6d14268f34d6e00015] <==
	I1007 13:05:32.311284       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 13:05:32.329362       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 13:05:32.330915       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 13:05:32.346166       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 13:05:32.350005       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a27e81e1-200b-4e26-81a3-ca764a02c265", APIVersion:"v1", ResourceVersion:"886", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-779469_1abdc3ad-4d93-43cd-9a9c-3999da9bd98b became leader
	I1007 13:05:32.350254       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-779469_1abdc3ad-4d93-43cd-9a9c-3999da9bd98b!
	I1007 13:05:32.455090       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-779469_1abdc3ad-4d93-43cd-9a9c-3999da9bd98b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-779469 -n addons-779469
helpers_test.go:261: (dbg) Run:  kubectl --context addons-779469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-69978 ingress-nginx-admission-patch-9jhtl
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-779469 describe pod ingress-nginx-admission-create-69978 ingress-nginx-admission-patch-9jhtl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-779469 describe pod ingress-nginx-admission-create-69978 ingress-nginx-admission-patch-9jhtl: exit status 1 (84.913129ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-69978" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9jhtl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-779469 describe pod ingress-nginx-admission-create-69978 ingress-nginx-admission-patch-9jhtl: exit status 1
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-779469 addons disable ingress-dns --alsologtostderr -v=1: (1.649708619s)
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 addons disable ingress --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-779469 addons disable ingress --alsologtostderr -v=1: (7.778199253s)
--- FAIL: TestAddons/parallel/Ingress (153.13s)
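The two non-running pods listed by the field selector above, ingress-nginx-admission-create-69978 and ingress-nginx-admission-patch-9jhtl, are created by the ingress-nginx admission webhook Jobs and are expected to finish rather than keep running, so their presence in the non-running list is usually benign; the NotFound errors from the follow-up describe suggest they were cleaned up between the two commands. A minimal way to rerun the same post-mortem check against a live profile (a sketch; the addons-779469 context and the ingress-nginx namespace/controller names are taken from this run and the standard minikube ingress addon) is:

	kubectl --context addons-779469 get pods -A --field-selector=status.phase!=Running
	kubectl --context addons-779469 -n ingress-nginx get jobs,pods
	kubectl --context addons-779469 -n ingress-nginx describe deploy/ingress-nginx-controller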

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (340.6s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.566371ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-zhbq5" [aadc85ae-34d8-46da-8c72-e453e7246ef7] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003652221s
addons_test.go:402: (dbg) Run:  kubectl --context addons-779469 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-779469 top pods -n kube-system: exit status 1 (99.219216ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kfrdl, age: 11m13.907494567s

                                                
                                                
** /stderr **
I1007 13:16:03.910937 1694126 retry.go:31] will retry after 3.935847171s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-779469 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-779469 top pods -n kube-system: exit status 1 (85.997529ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kfrdl, age: 11m17.930003281s

                                                
                                                
** /stderr **
I1007 13:16:07.933134 1694126 retry.go:31] will retry after 2.676092192s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-779469 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-779469 top pods -n kube-system: exit status 1 (91.996432ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kfrdl, age: 11m20.699375218s

                                                
                                                
** /stderr **
I1007 13:16:10.702286 1694126 retry.go:31] will retry after 5.915470094s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-779469 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-779469 top pods -n kube-system: exit status 1 (104.108696ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kfrdl, age: 11m26.719148117s

                                                
                                                
** /stderr **
I1007 13:16:16.722535 1694126 retry.go:31] will retry after 11.928853479s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-779469 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-779469 top pods -n kube-system: exit status 1 (90.623375ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kfrdl, age: 11m38.739584454s

                                                
                                                
** /stderr **
I1007 13:16:28.742645 1694126 retry.go:31] will retry after 18.556314602s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-779469 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-779469 top pods -n kube-system: exit status 1 (93.760405ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kfrdl, age: 11m57.390535629s

                                                
                                                
** /stderr **
I1007 13:16:47.393974 1694126 retry.go:31] will retry after 28.745184294s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-779469 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-779469 top pods -n kube-system: exit status 1 (81.630554ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kfrdl, age: 12m26.219498335s

                                                
                                                
** /stderr **
I1007 13:17:16.222341 1694126 retry.go:31] will retry after 27.269339236s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-779469 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-779469 top pods -n kube-system: exit status 1 (95.292956ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kfrdl, age: 12m53.587727893s

                                                
                                                
** /stderr **
I1007 13:17:43.591192 1694126 retry.go:31] will retry after 45.565992206s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-779469 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-779469 top pods -n kube-system: exit status 1 (88.700677ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kfrdl, age: 13m39.243545683s

                                                
                                                
** /stderr **
I1007 13:18:29.246461 1694126 retry.go:31] will retry after 1m0.348748167s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-779469 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-779469 top pods -n kube-system: exit status 1 (84.797677ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kfrdl, age: 14m39.676876668s

                                                
                                                
** /stderr **
I1007 13:19:29.680566 1694126 retry.go:31] will retry after 50.862924581s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-779469 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-779469 top pods -n kube-system: exit status 1 (84.261113ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kfrdl, age: 15m30.629740981s

                                                
                                                
** /stderr **
I1007 13:20:20.632686 1694126 retry.go:31] will retry after 34.40505207s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-779469 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-779469 top pods -n kube-system: exit status 1 (108.383868ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kfrdl, age: 16m5.14163347s

                                                
                                                
** /stderr **
I1007 13:20:55.147030 1694126 retry.go:31] will retry after 40.126850207s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-779469 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-779469 top pods -n kube-system: exit status 1 (91.483372ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-kfrdl, age: 16m45.362750287s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
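The metrics-server pod itself reported Running within about 6s, yet every kubectl top retry over the next several minutes returned "Metrics not available" for coredns, so the failure most likely sits in the Metrics API path rather than in the pod. A minimal way to narrow this down on a live profile (commands assume the same addons-779469 context as this run, and the metrics-server deployment name visible in the pod name above) is to check whether the v1beta1.metrics.k8s.io APIService is Available and whether the API actually returns pod metrics:

	kubectl --context addons-779469 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-779469 get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods" | head
	kubectl --context addons-779469 -n kube-system logs deploy/metrics-server --tail=50

If the APIService is Available but the raw query returns an empty item list, metrics-server has usually not yet scraped the kubelets (or cannot reach them), which is consistent with the repeated "Metrics not available ... age:" errors above.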
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-779469
helpers_test.go:235: (dbg) docker inspect addons-779469:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "af34deb52be076fb7ac68abb938793931472379dd5995e00397ab399714f2ba6",
	        "Created": "2024-10-07T13:04:23.307101975Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1695380,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-07T13:04:23.458809165Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/af34deb52be076fb7ac68abb938793931472379dd5995e00397ab399714f2ba6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/af34deb52be076fb7ac68abb938793931472379dd5995e00397ab399714f2ba6/hostname",
	        "HostsPath": "/var/lib/docker/containers/af34deb52be076fb7ac68abb938793931472379dd5995e00397ab399714f2ba6/hosts",
	        "LogPath": "/var/lib/docker/containers/af34deb52be076fb7ac68abb938793931472379dd5995e00397ab399714f2ba6/af34deb52be076fb7ac68abb938793931472379dd5995e00397ab399714f2ba6-json.log",
	        "Name": "/addons-779469",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-779469:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-779469",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c32aa627b8409b0875544f84c1089059aa0cd4f7097ccf2d6b61621994b0f35b-init/diff:/var/lib/docker/overlay2/ba883e93760810ee908affcdb026e83ee6095990c52f4c87c201773cc7ffeb3e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c32aa627b8409b0875544f84c1089059aa0cd4f7097ccf2d6b61621994b0f35b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c32aa627b8409b0875544f84c1089059aa0cd4f7097ccf2d6b61621994b0f35b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c32aa627b8409b0875544f84c1089059aa0cd4f7097ccf2d6b61621994b0f35b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-779469",
	                "Source": "/var/lib/docker/volumes/addons-779469/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-779469",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-779469",
	                "name.minikube.sigs.k8s.io": "addons-779469",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c79b5bc4d521014ff2c5e3df210959d3649b4aabd99b5264e82c5bf5ec5e97e7",
	            "SandboxKey": "/var/run/docker/netns/c79b5bc4d521",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38266"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38267"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38270"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38268"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38269"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-779469": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6c467b33615e9b23694556cf703c67534d9664704d2d9881f48bf748b99e88c5",
	                    "EndpointID": "87bad789c084c926cda9e505f87a4b6890b436c51a7024d8284faaa41d5f2b8d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-779469",
	                        "af34deb52be0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
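Note (editorial, not part of the captured output): the docker inspect dump above records the host port bindings for the addons-779469 container, e.g. container port 22/tcp published on 127.0.0.1:38266. When reproducing the post-mortem by hand, that SSH port can be read back with the same Go-template lookup the provisioning log further down uses:

docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-779469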
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-779469 -n addons-779469
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-779469 logs -n 25: (1.38019102s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-951215 | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC |                     |
	|         | download-docker-951215                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-951215                                                                   | download-docker-951215 | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC | 07 Oct 24 13:03 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-354806   | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC |                     |
	|         | binary-mirror-354806                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38505                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-354806                                                                     | binary-mirror-354806   | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC | 07 Oct 24 13:03 UTC |
	| addons  | enable dashboard -p                                                                         | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC |                     |
	|         | addons-779469                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC |                     |
	|         | addons-779469                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-779469 --wait=true                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC | 07 Oct 24 13:07 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-779469 addons disable                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:07 UTC | 07 Oct 24 13:07 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-779469 addons disable                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	|         | -p addons-779469                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-779469 addons disable                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-779469 ip                                                                            | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	| addons  | addons-779469 addons disable                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-779469 addons disable                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	|         | -p addons-779469                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-779469 ssh cat                                                                       | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	|         | /opt/local-path-provisioner/pvc-ef2e515d-a253-470e-a4c5-ae9b384f01de_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-779469 addons disable                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:15 UTC | 07 Oct 24 13:15 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-779469 addons                                                                        | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:16 UTC | 07 Oct 24 13:16 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-779469 addons                                                                        | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:16 UTC | 07 Oct 24 13:16 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-779469 addons                                                                        | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:16 UTC | 07 Oct 24 13:17 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-779469 addons                                                                        | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:17 UTC | 07 Oct 24 13:17 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-779469 ssh curl -s                                                                   | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:17 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-779469 ip                                                                            | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:19 UTC | 07 Oct 24 13:19 UTC |
	| addons  | addons-779469 addons disable                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:19 UTC | 07 Oct 24 13:19 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-779469 addons disable                                                                | addons-779469          | jenkins | v1.34.0 | 07 Oct 24 13:19 UTC | 07 Oct 24 13:19 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
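Note (editorial, reassembled from the audit rows above for readability; the binary path is taken from the surrounding harness commands, not from the table itself): the wrapped "start" entry corresponds to a single invocation along these lines:

out/minikube-linux-arm64 start -p addons-779469 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher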
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:03:58
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:03:58.590991 1694879 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:03:58.591152 1694879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:03:58.591178 1694879 out.go:358] Setting ErrFile to fd 2...
	I1007 13:03:58.591198 1694879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:03:58.591461 1694879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-1688750/.minikube/bin
	I1007 13:03:58.591960 1694879 out.go:352] Setting JSON to false
	I1007 13:03:58.592858 1694879 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":96390,"bootTime":1728209849,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1007 13:03:58.592934 1694879 start.go:139] virtualization:  
	I1007 13:03:58.595954 1694879 out.go:177] * [addons-779469] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 13:03:58.598496 1694879 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:03:58.598537 1694879 notify.go:220] Checking for updates...
	I1007 13:03:58.601961 1694879 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:03:58.604140 1694879 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-1688750/kubeconfig
	I1007 13:03:58.606741 1694879 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-1688750/.minikube
	I1007 13:03:58.608845 1694879 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 13:03:58.610518 1694879 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:03:58.612435 1694879 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:03:58.639675 1694879 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 13:03:58.639810 1694879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:03:58.692800 1694879 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-07 13:03:58.683328708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:03:58.692924 1694879 docker.go:318] overlay module found
	I1007 13:03:58.695118 1694879 out.go:177] * Using the docker driver based on user configuration
	I1007 13:03:58.696768 1694879 start.go:297] selected driver: docker
	I1007 13:03:58.696786 1694879 start.go:901] validating driver "docker" against <nil>
	I1007 13:03:58.696801 1694879 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:03:58.697423 1694879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:03:58.750179 1694879 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-07 13:03:58.740210496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:03:58.750395 1694879 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 13:03:58.750629 1694879 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:03:58.752278 1694879 out.go:177] * Using Docker driver with root privileges
	I1007 13:03:58.753840 1694879 cni.go:84] Creating CNI manager for ""
	I1007 13:03:58.753910 1694879 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 13:03:58.753924 1694879 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 13:03:58.754011 1694879 start.go:340] cluster config:
	{Name:addons-779469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-779469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:03:58.755978 1694879 out.go:177] * Starting "addons-779469" primary control-plane node in "addons-779469" cluster
	I1007 13:03:58.757363 1694879 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 13:03:58.758550 1694879 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 13:03:58.759864 1694879 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:03:58.759918 1694879 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1007 13:03:58.759926 1694879 cache.go:56] Caching tarball of preloaded images
	I1007 13:03:58.760010 1694879 preload.go:172] Found /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1007 13:03:58.760020 1694879 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:03:58.760356 1694879 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/config.json ...
	I1007 13:03:58.760376 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/config.json: {Name:mkadf868b80152a3a366ce24c34abe79891c74a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:03:58.760458 1694879 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 13:03:58.774541 1694879 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 13:03:58.774674 1694879 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1007 13:03:58.774700 1694879 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1007 13:03:58.774706 1694879 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1007 13:03:58.774713 1694879 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1007 13:03:58.774719 1694879 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from local cache
	I1007 13:04:15.947604 1694879 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from cached tarball
	I1007 13:04:15.947642 1694879 cache.go:194] Successfully downloaded all kic artifacts
	I1007 13:04:15.947685 1694879 start.go:360] acquireMachinesLock for addons-779469: {Name:mkf6a3f1a5f9f020586f81ac1ba0c0c9f942937c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:04:15.947820 1694879 start.go:364] duration metric: took 107.649µs to acquireMachinesLock for "addons-779469"
	I1007 13:04:15.947850 1694879 start.go:93] Provisioning new machine with config: &{Name:addons-779469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-779469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:04:15.947925 1694879 start.go:125] createHost starting for "" (driver="docker")
	I1007 13:04:15.949634 1694879 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1007 13:04:15.949885 1694879 start.go:159] libmachine.API.Create for "addons-779469" (driver="docker")
	I1007 13:04:15.949923 1694879 client.go:168] LocalClient.Create starting
	I1007 13:04:15.950045 1694879 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem
	I1007 13:04:16.110919 1694879 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem
	I1007 13:04:17.570128 1694879 cli_runner.go:164] Run: docker network inspect addons-779469 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1007 13:04:17.589460 1694879 cli_runner.go:211] docker network inspect addons-779469 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1007 13:04:17.589555 1694879 network_create.go:284] running [docker network inspect addons-779469] to gather additional debugging logs...
	I1007 13:04:17.589575 1694879 cli_runner.go:164] Run: docker network inspect addons-779469
	W1007 13:04:17.604775 1694879 cli_runner.go:211] docker network inspect addons-779469 returned with exit code 1
	I1007 13:04:17.604806 1694879 network_create.go:287] error running [docker network inspect addons-779469]: docker network inspect addons-779469: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-779469 not found
	I1007 13:04:17.604820 1694879 network_create.go:289] output of [docker network inspect addons-779469]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-779469 not found
	
	** /stderr **
	I1007 13:04:17.604921 1694879 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 13:04:17.620548 1694879 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40006a5d80}
	I1007 13:04:17.620593 1694879 network_create.go:124] attempt to create docker network addons-779469 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1007 13:04:17.620658 1694879 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-779469 addons-779469
	I1007 13:04:17.690204 1694879 network_create.go:108] docker network addons-779469 192.168.49.0/24 created
	I1007 13:04:17.690243 1694879 kic.go:121] calculated static IP "192.168.49.2" for the "addons-779469" container
	I1007 13:04:17.690321 1694879 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1007 13:04:17.705519 1694879 cli_runner.go:164] Run: docker volume create addons-779469 --label name.minikube.sigs.k8s.io=addons-779469 --label created_by.minikube.sigs.k8s.io=true
	I1007 13:04:17.723148 1694879 oci.go:103] Successfully created a docker volume addons-779469
	I1007 13:04:17.723250 1694879 cli_runner.go:164] Run: docker run --rm --name addons-779469-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-779469 --entrypoint /usr/bin/test -v addons-779469:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1007 13:04:19.245037 1694879 cli_runner.go:217] Completed: docker run --rm --name addons-779469-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-779469 --entrypoint /usr/bin/test -v addons-779469:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib: (1.521744864s)
	I1007 13:04:19.245065 1694879 oci.go:107] Successfully prepared a docker volume addons-779469
	I1007 13:04:19.245090 1694879 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:04:19.245112 1694879 kic.go:194] Starting extracting preloaded images to volume ...
	I1007 13:04:19.245178 1694879 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-779469:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
	I1007 13:04:23.243433 1694879 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-779469:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (3.998215142s)
	I1007 13:04:23.243472 1694879 kic.go:203] duration metric: took 3.998355652s to extract preloaded images to volume ...
	W1007 13:04:23.243637 1694879 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1007 13:04:23.243761 1694879 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1007 13:04:23.292652 1694879 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-779469 --name addons-779469 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-779469 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-779469 --network addons-779469 --ip 192.168.49.2 --volume addons-779469:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1007 13:04:23.632644 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Running}}
	I1007 13:04:23.660183 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:23.686759 1694879 cli_runner.go:164] Run: docker exec addons-779469 stat /var/lib/dpkg/alternatives/iptables
	I1007 13:04:23.761450 1694879 oci.go:144] the created container "addons-779469" has a running status.
	I1007 13:04:23.761487 1694879 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa...
	I1007 13:04:24.130726 1694879 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1007 13:04:24.160447 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:24.187373 1694879 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1007 13:04:24.187400 1694879 kic_runner.go:114] Args: [docker exec --privileged addons-779469 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1007 13:04:24.275771 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:24.297104 1694879 machine.go:93] provisionDockerMachine start ...
	I1007 13:04:24.297203 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:24.319107 1694879 main.go:141] libmachine: Using SSH client type: native
	I1007 13:04:24.319381 1694879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38266 <nil> <nil>}
	I1007 13:04:24.319391 1694879 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:04:24.492365 1694879 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-779469
	
	I1007 13:04:24.492393 1694879 ubuntu.go:169] provisioning hostname "addons-779469"
	I1007 13:04:24.492467 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:24.514646 1694879 main.go:141] libmachine: Using SSH client type: native
	I1007 13:04:24.514881 1694879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38266 <nil> <nil>}
	I1007 13:04:24.514898 1694879 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-779469 && echo "addons-779469" | sudo tee /etc/hostname
	I1007 13:04:24.667369 1694879 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-779469
	
	I1007 13:04:24.667483 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:24.692536 1694879 main.go:141] libmachine: Using SSH client type: native
	I1007 13:04:24.692771 1694879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38266 <nil> <nil>}
	I1007 13:04:24.692788 1694879 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-779469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-779469/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-779469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:04:24.832129 1694879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:04:24.832152 1694879 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18424-1688750/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-1688750/.minikube}
	I1007 13:04:24.832179 1694879 ubuntu.go:177] setting up certificates
	I1007 13:04:24.832194 1694879 provision.go:84] configureAuth start
	I1007 13:04:24.832256 1694879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-779469
	I1007 13:04:24.852945 1694879 provision.go:143] copyHostCerts
	I1007 13:04:24.853027 1694879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.pem (1078 bytes)
	I1007 13:04:24.853179 1694879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-1688750/.minikube/cert.pem (1123 bytes)
	I1007 13:04:24.853237 1694879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-1688750/.minikube/key.pem (1679 bytes)
	I1007 13:04:24.853289 1694879 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca-key.pem org=jenkins.addons-779469 san=[127.0.0.1 192.168.49.2 addons-779469 localhost minikube]
	I1007 13:04:25.022336 1694879 provision.go:177] copyRemoteCerts
	I1007 13:04:25.022413 1694879 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:04:25.022459 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:25.039356 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:25.137146 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 13:04:25.163503 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 13:04:25.189557 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 13:04:25.213561 1694879 provision.go:87] duration metric: took 381.353185ms to configureAuth
	I1007 13:04:25.213592 1694879 ubuntu.go:193] setting minikube options for container-runtime
	I1007 13:04:25.213794 1694879 config.go:182] Loaded profile config "addons-779469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:04:25.213899 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:25.231245 1694879 main.go:141] libmachine: Using SSH client type: native
	I1007 13:04:25.231492 1694879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38266 <nil> <nil>}
	I1007 13:04:25.231515 1694879 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:04:25.469435 1694879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:04:25.469456 1694879 machine.go:96] duration metric: took 1.172331015s to provisionDockerMachine
	I1007 13:04:25.469466 1694879 client.go:171] duration metric: took 9.519533983s to LocalClient.Create
	I1007 13:04:25.469485 1694879 start.go:167] duration metric: took 9.519602642s to libmachine.API.Create "addons-779469"
	I1007 13:04:25.469493 1694879 start.go:293] postStartSetup for "addons-779469" (driver="docker")
	I1007 13:04:25.469505 1694879 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:04:25.469568 1694879 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:04:25.469618 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:25.488283 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:25.584813 1694879 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:04:25.588088 1694879 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 13:04:25.588126 1694879 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 13:04:25.588138 1694879 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 13:04:25.588145 1694879 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 13:04:25.588157 1694879 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-1688750/.minikube/addons for local assets ...
	I1007 13:04:25.588230 1694879 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-1688750/.minikube/files for local assets ...
	I1007 13:04:25.588255 1694879 start.go:296] duration metric: took 118.757032ms for postStartSetup
	I1007 13:04:25.588573 1694879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-779469
	I1007 13:04:25.606306 1694879 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/config.json ...
	I1007 13:04:25.606599 1694879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:04:25.606660 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:25.624497 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:25.716445 1694879 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 13:04:25.721031 1694879 start.go:128] duration metric: took 9.773089629s to createHost
	I1007 13:04:25.721056 1694879 start.go:83] releasing machines lock for "addons-779469", held for 9.773224469s
	I1007 13:04:25.721126 1694879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-779469
	I1007 13:04:25.737691 1694879 ssh_runner.go:195] Run: cat /version.json
	I1007 13:04:25.737747 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:25.737992 1694879 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:04:25.738066 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:25.760615 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:25.767638 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:25.983413 1694879 ssh_runner.go:195] Run: systemctl --version
	I1007 13:04:25.987517 1694879 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:04:26.129013 1694879 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 13:04:26.133366 1694879 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:04:26.153582 1694879 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1007 13:04:26.153721 1694879 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:04:26.186914 1694879 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1007 13:04:26.186936 1694879 start.go:495] detecting cgroup driver to use...
	I1007 13:04:26.186968 1694879 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 13:04:26.187020 1694879 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:04:26.203920 1694879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:04:26.214902 1694879 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:04:26.215006 1694879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:04:26.229011 1694879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:04:26.243502 1694879 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:04:26.330579 1694879 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:04:26.430502 1694879 docker.go:233] disabling docker service ...
	I1007 13:04:26.430609 1694879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:04:26.450474 1694879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:04:26.462448 1694879 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:04:26.553050 1694879 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:04:26.645854 1694879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:04:26.656539 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:04:26.673215 1694879 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:04:26.673280 1694879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:04:26.687625 1694879 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:04:26.687692 1694879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:04:26.697653 1694879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:04:26.707037 1694879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:04:26.716576 1694879 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:04:26.725855 1694879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:04:26.735253 1694879 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:04:26.751096 1694879 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
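The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10, sets cgroup_manager to "cgroupfs", re-adds conmon_cgroup = "pod", and seeds default_sysctls with "net.ipv4.ip_unprivileged_port_start=0". A minimal Go sketch of the same line-substitution idea follows; the setCrioOption helper and the hard-coded path are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption mirrors the sed calls above: it replaces any existing
// "key = ..." line with `key = "value"`, or appends the line if the key
// is not present yet. The helper name is hypothetical.
func setCrioOption(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.Match(conf) {
		return re.ReplaceAll(conf, []byte(line))
	}
	return append(conf, []byte("\n"+line+"\n")...)
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path as seen in the log
	conf, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
	conf = setCrioOption(conf, "conmon_cgroup", "pod")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}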
	I1007 13:04:26.761047 1694879 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:04:26.769851 1694879 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:04:26.778394 1694879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:04:26.857036 1694879 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:04:26.968429 1694879 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:04:26.968538 1694879 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:04:26.972057 1694879 start.go:563] Will wait 60s for crictl version
	I1007 13:04:26.972121 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:04:26.975460 1694879 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:04:27.014942 1694879 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1007 13:04:27.015067 1694879 ssh_runner.go:195] Run: crio --version
	I1007 13:04:27.053702 1694879 ssh_runner.go:195] Run: crio --version
	I1007 13:04:27.098344 1694879 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1007 13:04:27.101084 1694879 cli_runner.go:164] Run: docker network inspect addons-779469 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 13:04:27.116339 1694879 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1007 13:04:27.119972 1694879 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:04:27.130894 1694879 kubeadm.go:883] updating cluster {Name:addons-779469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-779469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:04:27.131024 1694879 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:04:27.131083 1694879 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:04:27.206230 1694879 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:04:27.206251 1694879 crio.go:433] Images already preloaded, skipping extraction
	I1007 13:04:27.206308 1694879 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:04:27.244345 1694879 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:04:27.244369 1694879 cache_images.go:84] Images are preloaded, skipping loading
	I1007 13:04:27.244379 1694879 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1007 13:04:27.244466 1694879 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-779469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-779469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 13:04:27.244554 1694879 ssh_runner.go:195] Run: crio config
	I1007 13:04:27.318963 1694879 cni.go:84] Creating CNI manager for ""
	I1007 13:04:27.319032 1694879 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 13:04:27.319056 1694879 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:04:27.319105 1694879 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-779469 NodeName:addons-779469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:04:27.319281 1694879 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-779469"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 13:04:27.319370 1694879 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:04:27.328356 1694879 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:04:27.328448 1694879 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:04:27.337493 1694879 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1007 13:04:27.357221 1694879 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:04:27.376150 1694879 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
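The kubeadm, kubelet, and kube-proxy configuration written above (kubeadm.go:187 and the scp of kubeadm.yaml.new) is rendered from a handful of cluster parameters: advertise address 192.168.49.2, bind port 8443, pod subnet 10.244.0.0/16, service subnet 10.96.0.0/12, cluster name mk. A hedged sketch of rendering such a fragment with Go's text/template follows; the template text and the clusterParams struct are invented for illustration and are not the template minikube ships.

package main

import (
	"os"
	"text/template"
)

// clusterParams holds the values substituted into the config above.
// The struct and field names are hypothetical, chosen for this sketch.
type clusterParams struct {
	AdvertiseAddress string
	BindPort         int
	PodSubnet        string
	ServiceSubnet    string
	ClusterName      string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		AdvertiseAddress: "192.168.49.2",
		BindPort:         8443,
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		ClusterName:      "mk",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}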
	I1007 13:04:27.394117 1694879 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1007 13:04:27.397643 1694879 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:04:27.408542 1694879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:04:27.502146 1694879 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:04:27.516652 1694879 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469 for IP: 192.168.49.2
	I1007 13:04:27.516688 1694879 certs.go:194] generating shared ca certs ...
	I1007 13:04:27.516705 1694879 certs.go:226] acquiring lock for ca certs: {Name:mk3a082a64706c071bb4db632f3ec05c7c14e01c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:27.516862 1694879 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.key
	I1007 13:04:27.924465 1694879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt ...
	I1007 13:04:27.924499 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt: {Name:mk0870e61242f9fe806e59e090e40476885a4ab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:27.925223 1694879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.key ...
	I1007 13:04:27.925239 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.key: {Name:mk3a5f0507ac2ca23a463229c2fb9e6c7860bcf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:27.925770 1694879 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.key
	I1007 13:04:28.348730 1694879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.crt ...
	I1007 13:04:28.348767 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.crt: {Name:mk90cbb5a99d3b72d5722f5c1e82e601a619dd18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:28.349450 1694879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.key ...
	I1007 13:04:28.349468 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.key: {Name:mk17258ccf583bd5881068f5e4a136c22883f9c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:28.349561 1694879 certs.go:256] generating profile certs ...
	I1007 13:04:28.349624 1694879 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.key
	I1007 13:04:28.349651 1694879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt with IP's: []
	I1007 13:04:28.716095 1694879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt ...
	I1007 13:04:28.716127 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: {Name:mke8035127e1a22111a029f870eb1cb4e1bed430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:28.716333 1694879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.key ...
	I1007 13:04:28.716347 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.key: {Name:mk388e6481fa92945a975aa0160fe88892b596ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:28.716442 1694879 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.key.72e8297d
	I1007 13:04:28.716464 1694879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.crt.72e8297d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1007 13:04:28.936226 1694879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.crt.72e8297d ...
	I1007 13:04:28.936258 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.crt.72e8297d: {Name:mkc10581cb757e3538060d902f3ecb30de78eabb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:28.937016 1694879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.key.72e8297d ...
	I1007 13:04:28.937039 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.key.72e8297d: {Name:mk537dca66b298768d37bc7187b56749a9900f90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:28.937145 1694879 certs.go:381] copying /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.crt.72e8297d -> /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.crt
	I1007 13:04:28.937224 1694879 certs.go:385] copying /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.key.72e8297d -> /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.key
	I1007 13:04:28.937286 1694879 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/proxy-client.key
	I1007 13:04:28.937308 1694879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/proxy-client.crt with IP's: []
	I1007 13:04:29.483883 1694879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/proxy-client.crt ...
	I1007 13:04:29.483915 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/proxy-client.crt: {Name:mkbafe7318ae053f255591f295b86bd3887ed668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:29.484647 1694879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/proxy-client.key ...
	I1007 13:04:29.484665 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/proxy-client.key: {Name:mkd0b891d7056adc4eeb3f9fd4497e3c47643866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:29.484898 1694879 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 13:04:29.484941 1694879 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem (1078 bytes)
	I1007 13:04:29.484973 1694879 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:04:29.485002 1694879 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem (1679 bytes)
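The profile certificates generated above are issued with IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2) so the apiserver certificate is valid for the service VIP, loopback, and the node address. Below is a small, self-contained Go sketch of creating a self-signed certificate with IP SANs via crypto/x509; it only illustrates the SAN mechanism and is not minikube's certs.go.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key pair for a self-signed certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// IP SANs copied from the log above; the CommonName is illustrative.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-example"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.49.2"),
		},
	}

	// Self-signed: the template is its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}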
	I1007 13:04:29.485683 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:04:29.511317 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:04:29.534975 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:04:29.559430 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 13:04:29.583286 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 13:04:29.607102 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:04:29.631206 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:04:29.655040 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 13:04:29.687871 1694879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:04:29.717204 1694879 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:04:29.747436 1694879 ssh_runner.go:195] Run: openssl version
	I1007 13:04:29.753907 1694879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:04:29.764054 1694879 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:04:29.768403 1694879 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 13:04 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:04:29.768466 1694879 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:04:29.777192 1694879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:04:29.789006 1694879 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:04:29.792759 1694879 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 13:04:29.792822 1694879 kubeadm.go:392] StartCluster: {Name:addons-779469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-779469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:04:29.792917 1694879 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:04:29.793013 1694879 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:04:29.833084 1694879 cri.go:89] found id: ""
	I1007 13:04:29.833166 1694879 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:04:29.841940 1694879 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:04:29.850762 1694879 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1007 13:04:29.850876 1694879 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:04:29.859936 1694879 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:04:29.859952 1694879 kubeadm.go:157] found existing configuration files:
	
	I1007 13:04:29.860003 1694879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:04:29.869105 1694879 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:04:29.869171 1694879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:04:29.877991 1694879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:04:29.886615 1694879 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:04:29.886699 1694879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:04:29.895220 1694879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:04:29.904408 1694879 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:04:29.904498 1694879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:04:29.913216 1694879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:04:29.922011 1694879 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:04:29.922077 1694879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:04:29.932551 1694879 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1007 13:04:29.975370 1694879 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:04:29.975447 1694879 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:04:29.995604 1694879 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1007 13:04:29.995745 1694879 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1007 13:04:29.995807 1694879 kubeadm.go:310] OS: Linux
	I1007 13:04:29.995884 1694879 kubeadm.go:310] CGROUPS_CPU: enabled
	I1007 13:04:29.995952 1694879 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1007 13:04:29.996024 1694879 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1007 13:04:29.996094 1694879 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1007 13:04:29.996183 1694879 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1007 13:04:29.996257 1694879 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1007 13:04:29.996335 1694879 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1007 13:04:29.996401 1694879 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1007 13:04:29.996474 1694879 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1007 13:04:30.078699 1694879 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:04:30.078844 1694879 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:04:30.078956 1694879 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:04:30.087013 1694879 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:04:30.090786 1694879 out.go:235]   - Generating certificates and keys ...
	I1007 13:04:30.090967 1694879 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:04:30.091050 1694879 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:04:30.834750 1694879 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 13:04:31.620888 1694879 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 13:04:31.762991 1694879 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 13:04:32.252935 1694879 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 13:04:33.355774 1694879 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 13:04:33.355978 1694879 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-779469 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1007 13:04:33.719632 1694879 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 13:04:33.719835 1694879 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-779469 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1007 13:04:34.331388 1694879 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 13:04:34.756092 1694879 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 13:04:35.127458 1694879 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 13:04:35.127819 1694879 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:04:35.310105 1694879 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:04:35.888803 1694879 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:04:36.113311 1694879 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:04:36.243172 1694879 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:04:36.415322 1694879 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:04:36.416048 1694879 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:04:36.419031 1694879 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:04:36.422254 1694879 out.go:235]   - Booting up control plane ...
	I1007 13:04:36.422355 1694879 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:04:36.422432 1694879 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:04:36.425898 1694879 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:04:36.441216 1694879 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:04:36.447158 1694879 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:04:36.447222 1694879 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:04:36.545968 1694879 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:04:36.546094 1694879 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:04:38.046754 1694879 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.500829679s
	I1007 13:04:38.046843 1694879 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:04:44.048664 1694879 kubeadm.go:310] [api-check] The API server is healthy after 6.001968403s
	I1007 13:04:44.069713 1694879 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:04:44.086252 1694879 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:04:44.113930 1694879 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:04:44.114186 1694879 kubeadm.go:310] [mark-control-plane] Marking the node addons-779469 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:04:44.125912 1694879 kubeadm.go:310] [bootstrap-token] Using token: 61fkzd.5r98z9kc930n9kup
	I1007 13:04:44.130342 1694879 out.go:235]   - Configuring RBAC rules ...
	I1007 13:04:44.130474 1694879 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:04:44.133629 1694879 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:04:44.142573 1694879 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:04:44.148930 1694879 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:04:44.153305 1694879 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:04:44.158316 1694879 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:04:44.457738 1694879 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:04:44.898676 1694879 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:04:45.457304 1694879 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:04:45.457329 1694879 kubeadm.go:310] 
	I1007 13:04:45.457392 1694879 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:04:45.457405 1694879 kubeadm.go:310] 
	I1007 13:04:45.457482 1694879 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:04:45.457490 1694879 kubeadm.go:310] 
	I1007 13:04:45.457515 1694879 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:04:45.457576 1694879 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:04:45.457633 1694879 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:04:45.457641 1694879 kubeadm.go:310] 
	I1007 13:04:45.457695 1694879 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:04:45.457703 1694879 kubeadm.go:310] 
	I1007 13:04:45.457750 1694879 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:04:45.457758 1694879 kubeadm.go:310] 
	I1007 13:04:45.457810 1694879 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:04:45.457888 1694879 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:04:45.457958 1694879 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:04:45.457967 1694879 kubeadm.go:310] 
	I1007 13:04:45.458051 1694879 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:04:45.458130 1694879 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:04:45.458140 1694879 kubeadm.go:310] 
	I1007 13:04:45.458225 1694879 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 61fkzd.5r98z9kc930n9kup \
	I1007 13:04:45.458330 1694879 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:659002c3c36ab0885bf81fe4258f61cead5b2d03fd8e3c7ecf684b765e0cd0b4 \
	I1007 13:04:45.458354 1694879 kubeadm.go:310] 	--control-plane 
	I1007 13:04:45.458361 1694879 kubeadm.go:310] 
	I1007 13:04:45.458445 1694879 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:04:45.458453 1694879 kubeadm.go:310] 
	I1007 13:04:45.458534 1694879 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 61fkzd.5r98z9kc930n9kup \
	I1007 13:04:45.458637 1694879 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:659002c3c36ab0885bf81fe4258f61cead5b2d03fd8e3c7ecf684b765e0cd0b4 
	I1007 13:04:45.461461 1694879 kubeadm.go:310] W1007 13:04:29.972119    1187 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:04:45.461763 1694879 kubeadm.go:310] W1007 13:04:29.972978    1187 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:04:45.461977 1694879 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1007 13:04:45.462088 1694879 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
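kubeadm init is run above as one shell command with --config /var/tmp/minikube/kubeadm.yaml and a long --ignore-preflight-errors list, and its combined output is echoed back as the kubeadm.go:310 lines. A rough Go sketch of shelling out to kubeadm and capturing that output follows; the PATH value and the shortened ignore list are assumptions for the example, and this is not minikube's ssh_runner.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths and flags adapted from the log above; running this requires the
	// kubeadm binary at that location and the generated kubeadm.yaml.
	cmd := exec.Command(
		"sudo", "env", "PATH=/var/lib/minikube/binaries/v1.31.1:/usr/bin:/bin",
		"kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem", // shortened for the sketch
	)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // mirrors the [init]/[preflight]/[certs] lines above
	if err != nil {
		fmt.Println("kubeadm init failed:", err)
	}
}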
	I1007 13:04:45.462107 1694879 cni.go:84] Creating CNI manager for ""
	I1007 13:04:45.462116 1694879 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 13:04:45.466791 1694879 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 13:04:45.469531 1694879 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 13:04:45.473229 1694879 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 13:04:45.473247 1694879 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 13:04:45.490661 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1007 13:04:45.788501 1694879 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:04:45.788636 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:45.788720 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-779469 minikube.k8s.io/updated_at=2024_10_07T13_04_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=addons-779469 minikube.k8s.io/primary=true
	I1007 13:04:45.956045 1694879 ops.go:34] apiserver oom_adj: -16
	I1007 13:04:45.956169 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:46.456926 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:46.956422 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:47.456397 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:47.956755 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:48.456749 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:48.956473 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:49.456743 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:49.956766 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:50.456294 1694879 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:04:50.575420 1694879 kubeadm.go:1113] duration metric: took 4.786831402s to wait for elevateKubeSystemPrivileges
	I1007 13:04:50.575454 1694879 kubeadm.go:394] duration metric: took 20.782634659s to StartCluster
	I1007 13:04:50.575477 1694879 settings.go:142] acquiring lock: {Name:mkc4eef6ec2cbdb287b7d49da88f957f9ede0465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:50.575648 1694879 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-1688750/kubeconfig
	I1007 13:04:50.576043 1694879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/kubeconfig: {Name:mkae782d6e0841d1e777fb7cb23057f0dd940052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:04:50.576777 1694879 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:04:50.576907 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 13:04:50.577150 1694879 config.go:182] Loaded profile config "addons-779469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:04:50.577179 1694879 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
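enable addons start logs the full toEnable map, and each "Setting addon ..." line that follows corresponds to a key whose value is true. A small sketch of deriving a stable, sorted list of enabled addons from such a map is below; the map literal is abbreviated from the log and the enabledAddons helper name is made up for the example.

package main

import (
	"fmt"
	"sort"
)

// enabledAddons returns the addon names whose value is true, sorted so the
// output is deterministic. The helper name is hypothetical.
func enabledAddons(toEnable map[string]bool) []string {
	var names []string
	for name, on := range toEnable {
		if on {
			names = append(names, name)
		}
	}
	sort.Strings(names)
	return names
}

func main() {
	// Abbreviated from the toEnable map in the log above.
	toEnable := map[string]bool{
		"yakd": true, "volcano": true, "ingress": true, "metrics-server": true,
		"gcp-auth": true, "registry": true, "dashboard": false, "ambassador": false,
	}
	fmt.Println(enabledAddons(toEnable))
}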
	I1007 13:04:50.577260 1694879 addons.go:69] Setting yakd=true in profile "addons-779469"
	I1007 13:04:50.577276 1694879 addons.go:234] Setting addon yakd=true in "addons-779469"
	I1007 13:04:50.577298 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.577802 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.578313 1694879 addons.go:69] Setting cloud-spanner=true in profile "addons-779469"
	I1007 13:04:50.578339 1694879 addons.go:234] Setting addon cloud-spanner=true in "addons-779469"
	I1007 13:04:50.578366 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.578387 1694879 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-779469"
	I1007 13:04:50.578404 1694879 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-779469"
	I1007 13:04:50.578428 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.578772 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.578832 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.582368 1694879 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-779469"
	I1007 13:04:50.582440 1694879 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-779469"
	I1007 13:04:50.582481 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.582976 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.583674 1694879 addons.go:69] Setting default-storageclass=true in profile "addons-779469"
	I1007 13:04:50.583712 1694879 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-779469"
	I1007 13:04:50.584105 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.600536 1694879 addons.go:69] Setting registry=true in profile "addons-779469"
	I1007 13:04:50.601090 1694879 addons.go:234] Setting addon registry=true in "addons-779469"
	I1007 13:04:50.601737 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.602806 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.603974 1694879 addons.go:69] Setting gcp-auth=true in profile "addons-779469"
	I1007 13:04:50.604051 1694879 mustload.go:65] Loading cluster: addons-779469
	I1007 13:04:50.604311 1694879 config.go:182] Loaded profile config "addons-779469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:04:50.604739 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.616393 1694879 addons.go:69] Setting ingress=true in profile "addons-779469"
	I1007 13:04:50.616431 1694879 addons.go:234] Setting addon ingress=true in "addons-779469"
	I1007 13:04:50.616473 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.616943 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.624183 1694879 addons.go:69] Setting storage-provisioner=true in profile "addons-779469"
	I1007 13:04:50.624420 1694879 addons.go:234] Setting addon storage-provisioner=true in "addons-779469"
	I1007 13:04:50.624602 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.625144 1694879 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-779469"
	I1007 13:04:50.625178 1694879 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-779469"
	I1007 13:04:50.625496 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.631700 1694879 addons.go:69] Setting ingress-dns=true in profile "addons-779469"
	I1007 13:04:50.631792 1694879 addons.go:234] Setting addon ingress-dns=true in "addons-779469"
	I1007 13:04:50.631886 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.632688 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.648949 1694879 addons.go:69] Setting volcano=true in profile "addons-779469"
	I1007 13:04:50.649033 1694879 addons.go:234] Setting addon volcano=true in "addons-779469"
	I1007 13:04:50.649099 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.649621 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.649872 1694879 addons.go:69] Setting inspektor-gadget=true in profile "addons-779469"
	I1007 13:04:50.649890 1694879 addons.go:234] Setting addon inspektor-gadget=true in "addons-779469"
	I1007 13:04:50.649916 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.650296 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.669521 1694879 out.go:177] * Verifying Kubernetes components...
	I1007 13:04:50.677498 1694879 addons.go:69] Setting metrics-server=true in profile "addons-779469"
	I1007 13:04:50.679567 1694879 addons.go:234] Setting addon metrics-server=true in "addons-779469"
	I1007 13:04:50.679652 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.677583 1694879 addons.go:69] Setting volumesnapshots=true in profile "addons-779469"
	I1007 13:04:50.695712 1694879 addons.go:234] Setting addon volumesnapshots=true in "addons-779469"
	I1007 13:04:50.695790 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.699229 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.705488 1694879 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1007 13:04:50.709477 1694879 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1007 13:04:50.709562 1694879 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1007 13:04:50.709697 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:50.716051 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.723004 1694879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:04:50.723713 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.731433 1694879 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1007 13:04:50.734879 1694879 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 13:04:50.734900 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1007 13:04:50.734981 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:50.758376 1694879 addons.go:234] Setting addon default-storageclass=true in "addons-779469"
	I1007 13:04:50.758443 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.758919 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.778112 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.789134 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1007 13:04:50.799707 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1007 13:04:50.802329 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1007 13:04:50.809960 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1007 13:04:50.818521 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1007 13:04:50.821258 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1007 13:04:50.821360 1694879 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1007 13:04:50.851327 1694879 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1007 13:04:50.857514 1694879 out.go:177]   - Using image docker.io/registry:2.8.3
	I1007 13:04:50.857676 1694879 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1007 13:04:50.861620 1694879 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 13:04:50.861696 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1007 13:04:50.861802 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:50.862122 1694879 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1007 13:04:50.862171 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1007 13:04:50.862248 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:50.894559 1694879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 13:04:50.897688 1694879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 13:04:50.911772 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1007 13:04:50.912093 1694879 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1007 13:04:50.912116 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1007 13:04:50.912202 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	W1007 13:04:50.927909 1694879 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1007 13:04:50.940202 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1007 13:04:50.942680 1694879 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:04:50.942706 1694879 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:04:50.942798 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:50.963342 1694879 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1007 13:04:50.963383 1694879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1007 13:04:50.963477 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:50.968861 1694879 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-779469"
	I1007 13:04:50.968905 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:50.969302 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:50.980108 1694879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1007 13:04:50.983437 1694879 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 13:04:50.983457 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1007 13:04:50.983544 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:51.018171 1694879 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:04:51.029307 1694879 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:04:51.029332 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:04:51.029402 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:51.033441 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 13:04:51.036533 1694879 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1007 13:04:51.036671 1694879 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1007 13:04:51.036725 1694879 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1007 13:04:51.040370 1694879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1007 13:04:51.040401 1694879 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1007 13:04:51.040485 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:51.042393 1694879 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:04:51.042414 1694879 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:04:51.042499 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:51.058107 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1007 13:04:51.058133 1694879 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1007 13:04:51.058214 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:51.070405 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.091428 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.116909 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.121119 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.136475 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.143690 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.171153 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.191926 1694879 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:04:51.192580 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.200250 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.207590 1694879 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1007 13:04:51.210082 1694879 out.go:177]   - Using image docker.io/busybox:stable
	I1007 13:04:51.217688 1694879 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 13:04:51.217708 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1007 13:04:51.217773 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:51.219773 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.235310 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.252346 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.269474 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:51.551105 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:04:51.596821 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 13:04:51.612862 1694879 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1007 13:04:51.612902 1694879 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1007 13:04:51.650333 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1007 13:04:51.652448 1694879 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:04:51.652472 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1007 13:04:51.662977 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1007 13:04:51.663012 1694879 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1007 13:04:51.674748 1694879 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1007 13:04:51.674845 1694879 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1007 13:04:51.689748 1694879 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1007 13:04:51.689829 1694879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1007 13:04:51.718154 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 13:04:51.736146 1694879 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1007 13:04:51.736237 1694879 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1007 13:04:51.769164 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:04:51.804360 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 13:04:51.808507 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 13:04:51.810761 1694879 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1007 13:04:51.810864 1694879 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1007 13:04:51.857076 1694879 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:04:51.857165 1694879 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:04:51.897651 1694879 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1007 13:04:51.897747 1694879 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1007 13:04:51.899612 1694879 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1007 13:04:51.899683 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1007 13:04:51.902111 1694879 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1007 13:04:51.902181 1694879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1007 13:04:51.912897 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1007 13:04:51.912983 1694879 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1007 13:04:52.022406 1694879 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1007 13:04:52.022481 1694879 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1007 13:04:52.051861 1694879 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:04:52.052167 1694879 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:04:52.090936 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1007 13:04:52.094871 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1007 13:04:52.094968 1694879 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1007 13:04:52.100205 1694879 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1007 13:04:52.100300 1694879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1007 13:04:52.153029 1694879 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1007 13:04:52.153135 1694879 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1007 13:04:52.227993 1694879 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1007 13:04:52.228014 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1007 13:04:52.246483 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:04:52.275619 1694879 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1007 13:04:52.275693 1694879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1007 13:04:52.297677 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1007 13:04:52.297762 1694879 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1007 13:04:52.396928 1694879 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1007 13:04:52.397014 1694879 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1007 13:04:52.443141 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1007 13:04:52.443224 1694879 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1007 13:04:52.444498 1694879 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1007 13:04:52.444566 1694879 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1007 13:04:52.448461 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1007 13:04:52.579807 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1007 13:04:52.579900 1694879 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1007 13:04:52.604887 1694879 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 13:04:52.604979 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1007 13:04:52.648781 1694879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1007 13:04:52.648860 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1007 13:04:52.750138 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1007 13:04:52.750226 1694879 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1007 13:04:52.763273 1694879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1007 13:04:52.763362 1694879 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1007 13:04:52.768091 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 13:04:52.894127 1694879 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.702164099s)
	I1007 13:04:52.894491 1694879 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.861019923s)
	I1007 13:04:52.894548 1694879 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
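	For context, the sed pipeline completed just above rewrites the coredns ConfigMap so that the Corefile gains a hosts block ahead of the forward plugin (and a log directive ahead of errors). Reconstructed from that command, the injected fragment is roughly:

		hosts {
		   192.168.49.1 host.minikube.internal
		   fallthrough
		}

	which is what lets in-cluster workloads resolve host.minikube.internal to 192.168.49.1.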
	I1007 13:04:52.896764 1694879 node_ready.go:35] waiting up to 6m0s for node "addons-779469" to be "Ready" ...
	I1007 13:04:52.897760 1694879 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 13:04:52.897819 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1007 13:04:52.916059 1694879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1007 13:04:52.916082 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1007 13:04:53.081940 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 13:04:53.086619 1694879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1007 13:04:53.086690 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1007 13:04:53.263939 1694879 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 13:04:53.264017 1694879 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1007 13:04:53.583254 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 13:04:53.771998 1694879 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-779469" context rescaled to 1 replicas
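	The rescale above is performed through the Kubernetes API by minikube itself; a hypothetical by-hand equivalent (not the call the code actually makes) would be:

		kubectl -n kube-system scale deployment coredns --replicas=1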
	I1007 13:04:55.059303 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:04:55.861779 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.3106273s)
	I1007 13:04:55.861863 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.265014398s)
	I1007 13:04:55.861901 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.211544932s)
	I1007 13:04:55.977253 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.259013872s)
	I1007 13:04:55.977352 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.208099549s)
	W1007 13:04:56.074591 1694879 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
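	The "object has been modified" warning is an optimistic-concurrency conflict while marking local-path as the default StorageClass. Setting (or re-applying) that default by hand uses the standard annotation; a sketch, assuming the usual kubectl patch form rather than whatever exact call the addon code issues:

		kubectl patch storageclass local-path \
		  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'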
	I1007 13:04:57.029627 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.225173959s)
	I1007 13:04:57.029659 1694879 addons.go:475] Verifying addon ingress=true in "addons-779469"
	I1007 13:04:57.029729 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.22114876s)
	I1007 13:04:57.029791 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.938697602s)
	I1007 13:04:57.029807 1694879 addons.go:475] Verifying addon registry=true in "addons-779469"
	I1007 13:04:57.030337 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.783823572s)
	I1007 13:04:57.030369 1694879 addons.go:475] Verifying addon metrics-server=true in "addons-779469"
	I1007 13:04:57.030412 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.581864912s)
	I1007 13:04:57.030576 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.262412362s)
	W1007 13:04:57.030605 1694879 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 13:04:57.030626 1694879 retry.go:31] will retry after 270.153464ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
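	The "ensure CRDs are installed first" failure is the usual CRD-before-CR ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same apply batch that creates the snapshot.storage.k8s.io CRDs, and the new kind is not registered yet when the class is applied. minikube simply retries (and later re-applies with --force, as seen below); a manual workaround, sketched with plain kubectl and assuming the same file layout, would be to wait for the CRD to be established first:

		kubectl wait --for condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml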
	I1007 13:04:57.030693 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.948678612s)
	I1007 13:04:57.033755 1694879 out.go:177] * Verifying registry addon...
	I1007 13:04:57.033812 1694879 out.go:177] * Verifying ingress addon...
	I1007 13:04:57.035501 1694879 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-779469 service yakd-dashboard -n yakd-dashboard
	
	I1007 13:04:57.038150 1694879 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1007 13:04:57.039062 1694879 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1007 13:04:57.052949 1694879 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 13:04:57.052974 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:04:57.053134 1694879 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1007 13:04:57.053148 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
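	The kapi.go polling above corresponds to ordinary label-selector queries; roughly equivalent checks from a shell (hypothetical commands, not part of the test itself) would be:

		kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
		kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx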
	I1007 13:04:57.293569 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.710211474s)
	I1007 13:04:57.293603 1694879 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-779469"
	I1007 13:04:57.296464 1694879 out.go:177] * Verifying csi-hostpath-driver addon...
	I1007 13:04:57.299304 1694879 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1007 13:04:57.301679 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 13:04:57.306561 1694879 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 13:04:57.306656 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:04:57.399940 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:04:57.546877 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:04:57.549177 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:04:57.803788 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:04:58.044663 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:04:58.046885 1694879 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1007 13:04:58.047029 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:58.057670 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:04:58.070056 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:58.234274 1694879 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1007 13:04:58.298922 1694879 addons.go:234] Setting addon gcp-auth=true in "addons-779469"
	I1007 13:04:58.298975 1694879 host.go:66] Checking if "addons-779469" exists ...
	I1007 13:04:58.299427 1694879 cli_runner.go:164] Run: docker container inspect addons-779469 --format={{.State.Status}}
	I1007 13:04:58.316559 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:04:58.335804 1694879 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1007 13:04:58.335864 1694879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-779469
	I1007 13:04:58.367948 1694879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38266 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/addons-779469/id_rsa Username:docker}
	I1007 13:04:58.546563 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:04:58.550800 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:04:58.803470 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:04:59.043545 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:04:59.044561 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:04:59.303304 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:04:59.400979 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:04:59.543004 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:04:59.555867 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:04:59.808621 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:00.089609 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:00.090261 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:00.305001 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:00.439883 1694879 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.138140479s)
	I1007 13:05:00.439979 1694879 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.104152918s)
	I1007 13:05:00.445754 1694879 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 13:05:00.452068 1694879 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1007 13:05:00.458762 1694879 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1007 13:05:00.458808 1694879 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1007 13:05:00.487411 1694879 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1007 13:05:00.487435 1694879 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1007 13:05:00.513003 1694879 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 13:05:00.513032 1694879 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1007 13:05:00.538090 1694879 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 13:05:00.553514 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:00.553851 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:00.805732 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:01.049684 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:01.051159 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:01.289707 1694879 addons.go:475] Verifying addon gcp-auth=true in "addons-779469"
	I1007 13:05:01.292464 1694879 out.go:177] * Verifying gcp-auth addon...
	I1007 13:05:01.296779 1694879 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1007 13:05:01.300916 1694879 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1007 13:05:01.300943 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
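	Verifying gcp-auth waits on the webhook pod in the gcp-auth namespace; a rough manual equivalent of the same readiness check (hypothetical, for illustration only) is:

		kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth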
	I1007 13:05:01.304302 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:01.401066 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:01.558672 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:01.564211 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:01.804295 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:01.808768 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:02.045477 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:02.045570 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:02.300730 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:02.303396 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:02.544625 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:02.547625 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:02.800439 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:02.802754 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:03.041609 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:03.044206 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:03.300770 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:03.302774 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:03.544961 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:03.546048 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:03.802101 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:03.804462 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:03.900764 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:04.042962 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:04.043976 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:04.300224 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:04.302940 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:04.545207 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:04.545586 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:04.799761 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:04.802505 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:05.041277 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:05.042609 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:05.300102 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:05.302444 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:05.546275 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:05.547235 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:05.801632 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:05.803077 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:06.041715 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:06.043711 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:06.300402 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:06.303263 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:06.400272 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:06.545921 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:06.550776 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:06.801226 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:06.803686 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:07.041660 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:07.042642 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:07.301005 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:07.302884 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:07.545011 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:07.546138 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:07.802046 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:07.803737 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:08.041765 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:08.043648 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:08.300615 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:08.303260 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:08.401488 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:08.545479 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:08.547379 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:08.800483 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:08.802910 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:09.041258 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:09.043119 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:09.300571 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:09.302676 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:09.545872 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:09.546795 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:09.800317 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:09.803464 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:10.041525 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:10.043055 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:10.300609 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:10.302947 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:10.546291 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:10.546939 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:10.800326 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:10.802254 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:10.900446 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:11.042061 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:11.043208 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:11.300567 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:11.302590 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:11.544958 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:11.546206 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:11.800090 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:11.803214 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:12.042116 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:12.042786 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:12.300472 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:12.302652 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:12.545847 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:12.546388 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:12.800666 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:12.803106 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:13.042073 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:13.043314 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:13.300663 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:13.304158 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:13.400826 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:13.544786 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:13.546239 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:13.800707 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:13.802498 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:14.041873 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:14.042894 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:14.300834 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:14.303488 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:14.545572 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:14.545708 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:14.800447 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:14.803445 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:15.042987 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:15.043755 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:15.299740 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:15.302379 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:15.546191 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:15.547019 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:15.800260 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:15.802671 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:15.900975 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:16.042824 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:16.043284 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:16.299690 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:16.302218 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:16.544775 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:16.546819 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:16.800220 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:16.802514 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:17.041371 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:17.042985 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:17.300462 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:17.303228 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:17.545330 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:17.545990 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:17.800715 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:17.802955 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:18.041699 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:18.043524 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:18.300255 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:18.302338 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:18.400824 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:18.547350 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:18.550262 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:18.799977 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:18.802545 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:19.042294 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:19.042713 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:19.301200 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:19.303358 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:19.545903 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:19.546810 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:19.800833 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:19.806591 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:20.041832 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:20.043588 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:20.300227 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:20.302924 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:20.546114 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:20.546484 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:20.800217 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:20.802916 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:20.900506 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:21.042773 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:21.043265 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:21.300383 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:21.302739 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:21.544697 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:21.546972 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:21.800418 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:21.802595 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:22.041859 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:22.044794 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:22.300223 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:22.303560 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:22.546194 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:22.548664 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:22.800356 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:22.802402 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:22.900672 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:23.042427 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:23.043380 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:23.300817 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:23.302636 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:23.544901 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:23.545675 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:23.800437 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:23.802791 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:24.042253 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:24.043444 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:24.300415 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:24.302532 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:24.544683 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:24.546501 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:24.800001 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:24.802516 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:24.901010 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:25.042685 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:25.043323 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:25.300036 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:25.302795 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:25.546472 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:25.547309 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:25.802053 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:25.803100 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:26.042253 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:26.044231 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:26.301129 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:26.303452 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:26.546559 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:26.548096 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:26.801507 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:26.805121 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:27.041677 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:27.043158 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:27.303022 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:27.304814 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:27.401033 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:27.547139 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:27.548566 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:27.800311 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:27.802882 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:28.041729 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:28.043171 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:28.303384 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:28.303870 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:28.545579 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:28.548205 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:28.800670 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:28.803275 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:29.041510 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:29.042742 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:29.301038 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:29.303432 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:29.545896 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:29.546304 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:29.800492 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:29.802692 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:29.901381 1694879 node_ready.go:53] node "addons-779469" has status "Ready":"False"
	I1007 13:05:30.045339 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:30.046153 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:30.300637 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:30.302779 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:30.545841 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:30.548586 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:30.800194 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:30.802282 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:31.042402 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:31.043344 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:31.315503 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:31.333430 1694879 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 13:05:31.333455 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:31.406624 1694879 node_ready.go:49] node "addons-779469" has status "Ready":"True"
	I1007 13:05:31.406647 1694879 node_ready.go:38] duration metric: took 38.509715733s for node "addons-779469" to be "Ready" ...
	I1007 13:05:31.406658 1694879 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:05:31.578364 1694879 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kfrdl" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:31.645584 1694879 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 13:05:31.645609 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:31.647176 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:31.826566 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:31.831785 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:32.069335 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:32.070172 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:32.303235 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:32.306024 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:32.545407 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:32.545883 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:32.584429 1694879 pod_ready.go:93] pod "coredns-7c65d6cfc9-kfrdl" in "kube-system" namespace has status "Ready":"True"
	I1007 13:05:32.584455 1694879 pod_ready.go:82] duration metric: took 1.006054269s for pod "coredns-7c65d6cfc9-kfrdl" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.584508 1694879 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-779469" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.590053 1694879 pod_ready.go:93] pod "etcd-addons-779469" in "kube-system" namespace has status "Ready":"True"
	I1007 13:05:32.590080 1694879 pod_ready.go:82] duration metric: took 5.556159ms for pod "etcd-addons-779469" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.590096 1694879 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-779469" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.595668 1694879 pod_ready.go:93] pod "kube-apiserver-addons-779469" in "kube-system" namespace has status "Ready":"True"
	I1007 13:05:32.595734 1694879 pod_ready.go:82] duration metric: took 5.602713ms for pod "kube-apiserver-addons-779469" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.595762 1694879 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-779469" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.600828 1694879 pod_ready.go:93] pod "kube-controller-manager-addons-779469" in "kube-system" namespace has status "Ready":"True"
	I1007 13:05:32.600855 1694879 pod_ready.go:82] duration metric: took 5.071927ms for pod "kube-controller-manager-addons-779469" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.600869 1694879 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6ncrf" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.606287 1694879 pod_ready.go:93] pod "kube-proxy-6ncrf" in "kube-system" namespace has status "Ready":"True"
	I1007 13:05:32.606315 1694879 pod_ready.go:82] duration metric: took 5.438582ms for pod "kube-proxy-6ncrf" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.606326 1694879 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-779469" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:32.801826 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:32.804788 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:33.010217 1694879 pod_ready.go:93] pod "kube-scheduler-addons-779469" in "kube-system" namespace has status "Ready":"True"
	I1007 13:05:33.010305 1694879 pod_ready.go:82] duration metric: took 403.938673ms for pod "kube-scheduler-addons-779469" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:33.010334 1694879 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace to be "Ready" ...
	I1007 13:05:33.043957 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:33.044628 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:33.301019 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:33.304753 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:33.544039 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:33.546203 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:33.803073 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:33.808838 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:34.045254 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:34.051198 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:34.303505 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:34.306577 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:34.552158 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:34.552669 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:34.801308 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:34.805144 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:35.018120 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:35.045113 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:35.046563 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:35.304864 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:35.306894 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:35.547733 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:35.551000 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:35.801618 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:35.806517 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:36.044951 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:36.047266 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:36.301268 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:36.306373 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:36.565705 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:36.566712 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:36.801381 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:36.804800 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:37.018941 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:37.046905 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:37.048469 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:37.301660 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:37.306725 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:37.547257 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:37.550262 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:37.803515 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:37.809117 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:38.051125 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:38.054023 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:38.301806 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:38.304583 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:38.554356 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:38.555293 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:38.800296 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:38.804501 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:39.047807 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:39.048872 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:39.315348 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:39.323400 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:39.517848 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:39.560927 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:39.561480 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:39.801672 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:39.806290 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:40.047044 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:40.047580 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:40.302315 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:40.306916 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:40.552680 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:40.554062 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:40.803340 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:40.807455 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:41.048999 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:41.050209 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:41.302744 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:41.307136 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:41.519128 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:41.565765 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:41.566925 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:41.814133 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:41.826448 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:42.046494 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:42.047431 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:42.303119 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:42.306628 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:42.548565 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:42.549449 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:42.801589 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:42.804406 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:43.041698 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:43.044548 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:43.304594 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:43.306898 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:43.558994 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:43.560582 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:43.801184 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:43.804586 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:44.018311 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:44.047808 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:44.048532 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:44.303942 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:44.305157 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:44.547268 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:44.548484 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:44.804139 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:44.806206 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:45.045064 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:45.045384 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:45.301957 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:45.304681 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:45.545992 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:45.547209 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:45.801361 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:45.804267 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:46.043429 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:46.043647 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:46.302006 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:46.304301 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:46.517238 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:46.568114 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:46.568602 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:46.803432 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:46.806005 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:47.069278 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:47.072336 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:47.301502 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:47.305327 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:47.562800 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:47.564905 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:47.802759 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:47.806307 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:48.045804 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:48.047811 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:48.311121 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:48.314784 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:48.522333 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:48.551458 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:48.552483 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:48.801220 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:48.806178 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:49.045215 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:49.046791 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:49.300894 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:49.306108 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:49.558721 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:49.560008 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:49.801820 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:49.805873 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:50.043839 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:50.050651 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:50.301313 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:50.305374 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:50.555368 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:50.556673 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:50.802062 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:50.805529 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:51.017515 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:51.041918 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:51.044186 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:51.300418 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:51.304958 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:51.568214 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:51.571798 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:51.801604 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:51.806834 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:52.045001 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:52.046589 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:52.301481 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:52.305449 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:52.547091 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:52.547166 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:52.800506 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:52.804046 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:53.044077 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:53.045201 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:53.300993 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:53.304211 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:53.519232 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:53.548325 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:53.549387 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:53.802681 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:53.806153 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:54.049443 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:54.051465 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:54.303710 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:54.308080 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:54.552097 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:54.552927 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:54.801013 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:54.805067 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:55.043731 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:55.044544 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:55.301168 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:55.304037 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:55.546482 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:55.547849 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:55.803309 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:55.809242 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:56.019132 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:56.044911 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:56.047332 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:56.301808 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:56.307917 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:56.554774 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:56.561177 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:56.800580 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:56.804448 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:57.046522 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:57.048058 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:57.300534 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:57.305700 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:57.546707 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:57.548666 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:57.801494 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:57.819455 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:58.050238 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:58.052663 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:58.302091 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:58.305906 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:58.519290 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:05:58.550840 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:58.554151 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:58.813934 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:58.818593 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:59.052953 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:59.054984 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:59.303161 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:59.310007 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:05:59.553797 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:05:59.554614 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:05:59.806650 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:05:59.827378 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:00.051231 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:00.053099 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:00.305774 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:00.310771 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:00.521238 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:00.565208 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:00.565942 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:00.813624 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:00.815069 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:01.044622 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:01.045124 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:01.301193 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:01.306158 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:01.556452 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:01.558063 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:01.805115 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:01.808151 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:02.045354 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:02.054514 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:02.301986 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:02.304348 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:02.544675 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:02.545882 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:02.800720 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:02.804565 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:03.019725 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:03.042699 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:03.044041 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:03.301379 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:03.305105 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:03.545260 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:03.546731 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:03.801953 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:03.804350 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:04.043517 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:04.044375 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:04.301093 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:04.303956 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:04.554425 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:04.554880 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:04.805065 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:04.805741 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:05.024633 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:05.052208 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:05.053295 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:05.302467 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:05.305695 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:05.546753 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:05.548050 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:05.803668 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:05.806876 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:06.046189 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:06.048055 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:06.300613 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:06.304774 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:06.546019 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 13:06:06.547233 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:06.804000 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:06.808030 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:07.045513 1694879 kapi.go:107] duration metric: took 1m10.007361631s to wait for kubernetes.io/minikube-addons=registry ...
	I1007 13:06:07.046893 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:07.300266 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:07.303512 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:07.516036 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:07.546767 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:07.800772 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:07.805116 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:08.044618 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:08.302749 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:08.307823 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:08.546150 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:08.807946 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:08.808377 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:09.044094 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:09.301249 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:09.306110 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:09.519012 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:09.558361 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:09.810049 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:09.812263 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:10.047818 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:10.301937 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:10.305912 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:10.546099 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:10.801862 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:10.803981 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:11.047000 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:11.300650 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:11.305293 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:11.544164 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:11.806601 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:11.808029 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:12.020770 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:12.043425 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:12.300788 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:12.303961 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:12.545686 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:12.800491 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:12.803875 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:13.052927 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:13.301224 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:13.306063 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:13.547654 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:13.800946 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:13.805921 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:14.045218 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:14.301103 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:14.303913 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:14.517264 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:14.553621 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:14.804611 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:14.807876 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:15.045559 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:15.301248 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:15.304446 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:15.547372 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:15.801300 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:15.804397 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:16.044837 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:16.301074 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:16.304241 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:16.519260 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:16.545362 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:16.810486 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:16.811282 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:17.050369 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:17.300838 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:17.303936 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:17.547506 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:17.801804 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:17.806742 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:18.049844 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:18.301481 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:18.303950 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:18.547291 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:18.801300 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:18.805354 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:19.016605 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:19.044663 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:19.304560 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:19.307483 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:19.546626 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:19.800651 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:19.804200 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:20.044778 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:20.299976 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:20.304213 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:20.546086 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:20.806162 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:20.807861 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:21.018300 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:21.043856 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:21.300143 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:21.303913 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:21.550856 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:21.801803 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:21.806419 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:22.050724 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:22.301442 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:22.305383 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:22.547350 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:22.801439 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:22.804910 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:23.043098 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:23.301506 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:23.304158 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:23.517214 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:23.546017 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:23.802129 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:23.805663 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:24.043911 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:24.300689 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:24.304259 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:24.547690 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:24.800334 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:24.804402 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:25.044149 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:25.301362 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:25.304391 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:25.545010 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:25.801619 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:25.804667 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:26.027857 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:26.045762 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:26.301584 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:26.305382 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:26.545061 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:26.808846 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:26.809271 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:27.047166 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:27.300696 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:27.303794 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:27.543475 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:27.800216 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:27.803691 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:28.044996 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:28.301012 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:28.305378 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:28.522630 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:28.544834 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:28.809607 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:28.810797 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:29.044130 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:29.303099 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:29.304531 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:29.546927 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:29.801642 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:29.804565 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:30.046589 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:30.301518 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:30.304782 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:30.553412 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:30.802267 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:30.806077 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:31.016566 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:31.044029 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:31.302524 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:31.307648 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:31.545402 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:31.801180 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:31.804363 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 13:06:32.043196 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:32.300764 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:32.304013 1694879 kapi.go:107] duration metric: took 1m35.004707839s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1007 13:06:32.545169 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:32.801369 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:33.017511 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:33.044722 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:33.300184 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:33.544841 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:33.800845 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:34.044446 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:34.301180 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:34.545054 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:34.800935 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:35.018137 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:35.043833 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:35.300633 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:35.546660 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:35.801005 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:36.043805 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:36.300718 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:36.545814 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:36.801633 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:37.020064 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:37.044318 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:37.301171 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:37.543913 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:37.801805 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:38.052692 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:38.300730 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:38.555749 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:38.800844 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:39.045595 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:39.300064 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:39.517445 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:39.545028 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:39.801584 1694879 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 13:06:40.045171 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:40.300803 1694879 kapi.go:107] duration metric: took 1m39.004023429s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1007 13:06:40.303670 1694879 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-779469 cluster.
	I1007 13:06:40.306006 1694879 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1007 13:06:40.308200 1694879 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
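	(Editor's note, not part of the log: the three addon messages above amount to a short how-to; any pod whose configuration carries the `gcp-auth-skip-secret` label key is skipped by the credential-injecting webhook. As a hypothetical illustration only, sketched from that message rather than from this test run, such a pod could be created as below. The label key is taken from the message; the "true" value, pod name, and image are assumed placeholders.)

	    kubectl --context addons-779469 apply -f - <<'EOF'
	    # Hypothetical opt-out pod: the gcp-auth-skip-secret key comes from the
	    # addon message above; the "true" value and all names are placeholders.
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds
	      labels:
	        gcp-auth-skip-secret: "true"
	    spec:
	      containers:
	      - name: main
	        image: busybox
	        command: ["sleep", "3600"]
	    EOF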
	I1007 13:06:40.557378 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:41.044771 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:41.519547 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:41.552889 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:42.052596 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:42.544140 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:43.051470 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:43.551924 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:44.016827 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:44.044217 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:44.548629 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:45.045645 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:45.556513 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:46.036352 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:46.044458 1694879 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 13:06:46.545616 1694879 kapi.go:107] duration metric: took 1m49.506552077s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1007 13:06:46.548548 1694879 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, cloud-spanner, default-storageclass, ingress-dns, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1007 13:06:46.551303 1694879 addons.go:510] duration metric: took 1m55.974106905s for enable addons: enabled=[storage-provisioner nvidia-device-plugin cloud-spanner default-storageclass ingress-dns metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1007 13:06:48.516436 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:51.021453 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:53.517374 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:56.017284 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:06:58.516670 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:07:00.517627 1694879 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"False"
	I1007 13:07:01.554977 1694879 pod_ready.go:93] pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace has status "Ready":"True"
	I1007 13:07:01.555009 1694879 pod_ready.go:82] duration metric: took 1m28.544651449s for pod "metrics-server-84c5f94fbc-zhbq5" in "kube-system" namespace to be "Ready" ...
	I1007 13:07:01.555025 1694879 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-mgxtx" in "kube-system" namespace to be "Ready" ...
	I1007 13:07:01.562427 1694879 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-mgxtx" in "kube-system" namespace has status "Ready":"True"
	I1007 13:07:01.562455 1694879 pod_ready.go:82] duration metric: took 7.420344ms for pod "nvidia-device-plugin-daemonset-mgxtx" in "kube-system" namespace to be "Ready" ...
	I1007 13:07:01.562477 1694879 pod_ready.go:39] duration metric: took 1m30.155806852s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:07:01.562519 1694879 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:07:01.562594 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:07:01.562686 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:07:01.636816 1694879 cri.go:89] found id: "b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b"
	I1007 13:07:01.636892 1694879 cri.go:89] found id: ""
	I1007 13:07:01.636919 1694879 logs.go:282] 1 containers: [b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b]
	I1007 13:07:01.636977 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:01.641530 1694879 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:07:01.641614 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:07:01.685865 1694879 cri.go:89] found id: "c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596"
	I1007 13:07:01.685891 1694879 cri.go:89] found id: ""
	I1007 13:07:01.685901 1694879 logs.go:282] 1 containers: [c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596]
	I1007 13:07:01.685984 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:01.689900 1694879 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:07:01.690019 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:07:01.735298 1694879 cri.go:89] found id: "be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d"
	I1007 13:07:01.735392 1694879 cri.go:89] found id: ""
	I1007 13:07:01.735401 1694879 logs.go:282] 1 containers: [be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d]
	I1007 13:07:01.735473 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:01.739444 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:07:01.739560 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:07:01.802326 1694879 cri.go:89] found id: "2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa"
	I1007 13:07:01.802412 1694879 cri.go:89] found id: ""
	I1007 13:07:01.802437 1694879 logs.go:282] 1 containers: [2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa]
	I1007 13:07:01.802537 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:01.808400 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:07:01.808585 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:07:01.876985 1694879 cri.go:89] found id: "24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680"
	I1007 13:07:01.877064 1694879 cri.go:89] found id: ""
	I1007 13:07:01.877092 1694879 logs.go:282] 1 containers: [24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680]
	I1007 13:07:01.877192 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:01.889084 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:07:01.889225 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:07:01.949358 1694879 cri.go:89] found id: "e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d"
	I1007 13:07:01.949445 1694879 cri.go:89] found id: ""
	I1007 13:07:01.949475 1694879 logs.go:282] 1 containers: [e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d]
	I1007 13:07:01.949597 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:01.955161 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:07:01.955243 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:07:01.999371 1694879 cri.go:89] found id: "f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd"
	I1007 13:07:01.999397 1694879 cri.go:89] found id: ""
	I1007 13:07:01.999406 1694879 logs.go:282] 1 containers: [f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd]
	I1007 13:07:01.999466 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:02.004588 1694879 logs.go:123] Gathering logs for dmesg ...
	I1007 13:07:02.004706 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:07:02.025290 1694879 logs.go:123] Gathering logs for kube-apiserver [b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b] ...
	I1007 13:07:02.025330 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b"
	I1007 13:07:02.088297 1694879 logs.go:123] Gathering logs for kube-scheduler [2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa] ...
	I1007 13:07:02.088336 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa"
	I1007 13:07:02.141863 1694879 logs.go:123] Gathering logs for kube-proxy [24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680] ...
	I1007 13:07:02.141899 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680"
	I1007 13:07:02.181758 1694879 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:07:02.181789 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:07:02.284862 1694879 logs.go:123] Gathering logs for container status ...
	I1007 13:07:02.284901 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:07:02.345618 1694879 logs.go:123] Gathering logs for kubelet ...
	I1007 13:07:02.345663 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:07:02.456779 1694879 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:07:02.456815 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:07:02.649199 1694879 logs.go:123] Gathering logs for etcd [c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596] ...
	I1007 13:07:02.649232 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596"
	I1007 13:07:02.700881 1694879 logs.go:123] Gathering logs for coredns [be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d] ...
	I1007 13:07:02.700915 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d"
	I1007 13:07:02.745922 1694879 logs.go:123] Gathering logs for kube-controller-manager [e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d] ...
	I1007 13:07:02.745956 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d"
	I1007 13:07:02.820534 1694879 logs.go:123] Gathering logs for kindnet [f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd] ...
	I1007 13:07:02.820632 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd"
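	(Editor's note, not part of the log: the "Gathering logs for ..." round above, timestamped 13:07:02, runs a fixed set of node-side commands. A condensed manual equivalent, sketched only from the commands already shown in this log, would be; container IDs are looked up at run time rather than hard-coded.)

	    # Host-level logs, same filters and limits as the run above
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    # Per-component container logs: find each container ID, then tail its log
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      for id in $(sudo crictl ps -a --quiet --name="$name"); do
	        sudo crictl logs --tail 400 "$id"
	      done
	    done
	    sudo crictl ps -a   # overall container status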
	I1007 13:07:05.361896 1694879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:07:05.375854 1694879 api_server.go:72] duration metric: took 2m14.799038512s to wait for apiserver process to appear ...
	I1007 13:07:05.375889 1694879 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:07:05.375940 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:07:05.376012 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:07:05.417609 1694879 cri.go:89] found id: "b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b"
	I1007 13:07:05.417634 1694879 cri.go:89] found id: ""
	I1007 13:07:05.417643 1694879 logs.go:282] 1 containers: [b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b]
	I1007 13:07:05.417701 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:05.421384 1694879 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:07:05.421454 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:07:05.458868 1694879 cri.go:89] found id: "c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596"
	I1007 13:07:05.458893 1694879 cri.go:89] found id: ""
	I1007 13:07:05.458902 1694879 logs.go:282] 1 containers: [c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596]
	I1007 13:07:05.458958 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:05.462476 1694879 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:07:05.462549 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:07:05.500301 1694879 cri.go:89] found id: "be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d"
	I1007 13:07:05.500324 1694879 cri.go:89] found id: ""
	I1007 13:07:05.500337 1694879 logs.go:282] 1 containers: [be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d]
	I1007 13:07:05.500392 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:05.503989 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:07:05.504067 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:07:05.548036 1694879 cri.go:89] found id: "2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa"
	I1007 13:07:05.548059 1694879 cri.go:89] found id: ""
	I1007 13:07:05.548066 1694879 logs.go:282] 1 containers: [2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa]
	I1007 13:07:05.548179 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:05.552691 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:07:05.552766 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:07:05.591012 1694879 cri.go:89] found id: "24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680"
	I1007 13:07:05.591034 1694879 cri.go:89] found id: ""
	I1007 13:07:05.591042 1694879 logs.go:282] 1 containers: [24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680]
	I1007 13:07:05.591099 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:05.594535 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:07:05.594605 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:07:05.633759 1694879 cri.go:89] found id: "e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d"
	I1007 13:07:05.633782 1694879 cri.go:89] found id: ""
	I1007 13:07:05.633790 1694879 logs.go:282] 1 containers: [e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d]
	I1007 13:07:05.633851 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:05.637362 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:07:05.637434 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:07:05.674027 1694879 cri.go:89] found id: "f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd"
	I1007 13:07:05.674050 1694879 cri.go:89] found id: ""
	I1007 13:07:05.674058 1694879 logs.go:282] 1 containers: [f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd]
	I1007 13:07:05.674112 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:05.677736 1694879 logs.go:123] Gathering logs for dmesg ...
	I1007 13:07:05.677763 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:07:05.694714 1694879 logs.go:123] Gathering logs for coredns [be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d] ...
	I1007 13:07:05.694744 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d"
	I1007 13:07:05.742070 1694879 logs.go:123] Gathering logs for kube-scheduler [2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa] ...
	I1007 13:07:05.742101 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa"
	I1007 13:07:05.788710 1694879 logs.go:123] Gathering logs for kube-proxy [24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680] ...
	I1007 13:07:05.788742 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680"
	I1007 13:07:05.832263 1694879 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:07:05.832291 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:07:05.926099 1694879 logs.go:123] Gathering logs for container status ...
	I1007 13:07:05.926140 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:07:05.992863 1694879 logs.go:123] Gathering logs for kubelet ...
	I1007 13:07:05.992899 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:07:06.110371 1694879 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:07:06.110412 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:07:06.258372 1694879 logs.go:123] Gathering logs for kube-apiserver [b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b] ...
	I1007 13:07:06.258404 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b"
	I1007 13:07:06.316897 1694879 logs.go:123] Gathering logs for etcd [c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596] ...
	I1007 13:07:06.316938 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596"
	I1007 13:07:06.372349 1694879 logs.go:123] Gathering logs for kube-controller-manager [e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d] ...
	I1007 13:07:06.372379 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d"
	I1007 13:07:06.439905 1694879 logs.go:123] Gathering logs for kindnet [f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd] ...
	I1007 13:07:06.439944 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd"
	I1007 13:07:08.987296 1694879 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:07:08.995108 1694879 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1007 13:07:08.996143 1694879 api_server.go:141] control plane version: v1.31.1
	I1007 13:07:08.996174 1694879 api_server.go:131] duration metric: took 3.620276222s to wait for apiserver health ...
	I1007 13:07:08.996183 1694879 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:07:08.996206 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:07:08.996274 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:07:09.041738 1694879 cri.go:89] found id: "b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b"
	I1007 13:07:09.041759 1694879 cri.go:89] found id: ""
	I1007 13:07:09.041767 1694879 logs.go:282] 1 containers: [b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b]
	I1007 13:07:09.041855 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:09.045416 1694879 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:07:09.045491 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:07:09.086865 1694879 cri.go:89] found id: "c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596"
	I1007 13:07:09.086936 1694879 cri.go:89] found id: ""
	I1007 13:07:09.086958 1694879 logs.go:282] 1 containers: [c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596]
	I1007 13:07:09.087053 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:09.091107 1694879 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:07:09.091245 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:07:09.131475 1694879 cri.go:89] found id: "be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d"
	I1007 13:07:09.131570 1694879 cri.go:89] found id: ""
	I1007 13:07:09.131595 1694879 logs.go:282] 1 containers: [be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d]
	I1007 13:07:09.131671 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:09.135811 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:07:09.135944 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:07:09.174730 1694879 cri.go:89] found id: "2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa"
	I1007 13:07:09.174752 1694879 cri.go:89] found id: ""
	I1007 13:07:09.174766 1694879 logs.go:282] 1 containers: [2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa]
	I1007 13:07:09.174826 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:09.178945 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:07:09.179023 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:07:09.218036 1694879 cri.go:89] found id: "24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680"
	I1007 13:07:09.218059 1694879 cri.go:89] found id: ""
	I1007 13:07:09.218066 1694879 logs.go:282] 1 containers: [24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680]
	I1007 13:07:09.218134 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:09.221902 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:07:09.221982 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:07:09.260955 1694879 cri.go:89] found id: "e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d"
	I1007 13:07:09.261029 1694879 cri.go:89] found id: ""
	I1007 13:07:09.261052 1694879 logs.go:282] 1 containers: [e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d]
	I1007 13:07:09.261149 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:09.265100 1694879 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:07:09.265176 1694879 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:07:09.303621 1694879 cri.go:89] found id: "f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd"
	I1007 13:07:09.303645 1694879 cri.go:89] found id: ""
	I1007 13:07:09.303654 1694879 logs.go:282] 1 containers: [f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd]
	I1007 13:07:09.303711 1694879 ssh_runner.go:195] Run: which crictl
	I1007 13:07:09.307406 1694879 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:07:09.307434 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:07:09.402973 1694879 logs.go:123] Gathering logs for kubelet ...
	I1007 13:07:09.403008 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:07:09.508633 1694879 logs.go:123] Gathering logs for dmesg ...
	I1007 13:07:09.508671 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:07:09.526115 1694879 logs.go:123] Gathering logs for etcd [c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596] ...
	I1007 13:07:09.526146 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596"
	I1007 13:07:09.586886 1694879 logs.go:123] Gathering logs for coredns [be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d] ...
	I1007 13:07:09.586917 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d"
	I1007 13:07:09.625948 1694879 logs.go:123] Gathering logs for kube-scheduler [2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa] ...
	I1007 13:07:09.625977 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa"
	I1007 13:07:09.670140 1694879 logs.go:123] Gathering logs for kindnet [f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd] ...
	I1007 13:07:09.670171 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd"
	I1007 13:07:09.717533 1694879 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:07:09.717560 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:07:09.862438 1694879 logs.go:123] Gathering logs for kube-apiserver [b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b] ...
	I1007 13:07:09.862470 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b"
	I1007 13:07:09.918608 1694879 logs.go:123] Gathering logs for kube-proxy [24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680] ...
	I1007 13:07:09.918641 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680"
	I1007 13:07:09.960260 1694879 logs.go:123] Gathering logs for kube-controller-manager [e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d] ...
	I1007 13:07:09.960291 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d"
	I1007 13:07:10.048353 1694879 logs.go:123] Gathering logs for container status ...
	I1007 13:07:10.048391 1694879 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:07:12.619477 1694879 system_pods.go:59] 18 kube-system pods found
	I1007 13:07:12.619972 1694879 system_pods.go:61] "coredns-7c65d6cfc9-kfrdl" [14d3df12-3c3d-42c8-aa8c-b4df3c618109] Running
	I1007 13:07:12.620006 1694879 system_pods.go:61] "csi-hostpath-attacher-0" [ff752214-ae2a-4f9c-961b-c35b8e8ba378] Running
	I1007 13:07:12.620019 1694879 system_pods.go:61] "csi-hostpath-resizer-0" [e7ae9420-a05a-4b44-9fa3-4ed00911fdb6] Running
	I1007 13:07:12.620031 1694879 system_pods.go:61] "csi-hostpathplugin-zkm7b" [3c568c8f-d491-46a6-b174-813f2ebcb2db] Running
	I1007 13:07:12.620044 1694879 system_pods.go:61] "etcd-addons-779469" [b9acbc51-2544-4ede-9914-b047804d4588] Running
	I1007 13:07:12.620050 1694879 system_pods.go:61] "kindnet-7g5zx" [1fbe4b22-9d49-433e-a471-d43e712fac98] Running
	I1007 13:07:12.620060 1694879 system_pods.go:61] "kube-apiserver-addons-779469" [47acf6d3-9a8b-4f39-a33b-3597a6552c9d] Running
	I1007 13:07:12.620064 1694879 system_pods.go:61] "kube-controller-manager-addons-779469" [f50b4a30-f444-4092-a7aa-89de7f71f64c] Running
	I1007 13:07:12.620075 1694879 system_pods.go:61] "kube-ingress-dns-minikube" [a86273b1-4cac-4662-930e-44ffe2fcc91f] Running
	I1007 13:07:12.620084 1694879 system_pods.go:61] "kube-proxy-6ncrf" [b8ff1258-fb1b-4c1c-ad5f-039e47f37a2a] Running
	I1007 13:07:12.620089 1694879 system_pods.go:61] "kube-scheduler-addons-779469" [ba19f222-1069-45d1-9e3e-2a085a065db6] Running
	I1007 13:07:12.620093 1694879 system_pods.go:61] "metrics-server-84c5f94fbc-zhbq5" [aadc85ae-34d8-46da-8c72-e453e7246ef7] Running
	I1007 13:07:12.620097 1694879 system_pods.go:61] "nvidia-device-plugin-daemonset-mgxtx" [981684ce-573b-4c82-a5d9-19d8c41421ce] Running
	I1007 13:07:12.620104 1694879 system_pods.go:61] "registry-66c9cd494c-b8457" [37368b21-bd4d-4d7c-b2ee-31f62690e0b7] Running
	I1007 13:07:12.620109 1694879 system_pods.go:61] "registry-proxy-p4tjk" [7f540d5b-5976-4e89-b2f2-c934d659d3f3] Running
	I1007 13:07:12.620121 1694879 system_pods.go:61] "snapshot-controller-56fcc65765-dzq9x" [eb3418ae-d06e-4798-ab91-395da46f8aa0] Running
	I1007 13:07:12.620125 1694879 system_pods.go:61] "snapshot-controller-56fcc65765-zqkd5" [67b9d86f-dbed-4441-929d-1cc25f4c2d59] Running
	I1007 13:07:12.620136 1694879 system_pods.go:61] "storage-provisioner" [9832c3db-5664-45e0-8be0-4521d011f68b] Running
	I1007 13:07:12.620147 1694879 system_pods.go:74] duration metric: took 3.623953566s to wait for pod list to return data ...
	I1007 13:07:12.620160 1694879 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:07:12.622986 1694879 default_sa.go:45] found service account: "default"
	I1007 13:07:12.623011 1694879 default_sa.go:55] duration metric: took 2.837203ms for default service account to be created ...
	I1007 13:07:12.623020 1694879 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:07:12.632932 1694879 system_pods.go:86] 18 kube-system pods found
	I1007 13:07:12.632966 1694879 system_pods.go:89] "coredns-7c65d6cfc9-kfrdl" [14d3df12-3c3d-42c8-aa8c-b4df3c618109] Running
	I1007 13:07:12.632974 1694879 system_pods.go:89] "csi-hostpath-attacher-0" [ff752214-ae2a-4f9c-961b-c35b8e8ba378] Running
	I1007 13:07:12.632980 1694879 system_pods.go:89] "csi-hostpath-resizer-0" [e7ae9420-a05a-4b44-9fa3-4ed00911fdb6] Running
	I1007 13:07:12.632985 1694879 system_pods.go:89] "csi-hostpathplugin-zkm7b" [3c568c8f-d491-46a6-b174-813f2ebcb2db] Running
	I1007 13:07:12.632990 1694879 system_pods.go:89] "etcd-addons-779469" [b9acbc51-2544-4ede-9914-b047804d4588] Running
	I1007 13:07:12.632995 1694879 system_pods.go:89] "kindnet-7g5zx" [1fbe4b22-9d49-433e-a471-d43e712fac98] Running
	I1007 13:07:12.632999 1694879 system_pods.go:89] "kube-apiserver-addons-779469" [47acf6d3-9a8b-4f39-a33b-3597a6552c9d] Running
	I1007 13:07:12.633004 1694879 system_pods.go:89] "kube-controller-manager-addons-779469" [f50b4a30-f444-4092-a7aa-89de7f71f64c] Running
	I1007 13:07:12.633008 1694879 system_pods.go:89] "kube-ingress-dns-minikube" [a86273b1-4cac-4662-930e-44ffe2fcc91f] Running
	I1007 13:07:12.633018 1694879 system_pods.go:89] "kube-proxy-6ncrf" [b8ff1258-fb1b-4c1c-ad5f-039e47f37a2a] Running
	I1007 13:07:12.633023 1694879 system_pods.go:89] "kube-scheduler-addons-779469" [ba19f222-1069-45d1-9e3e-2a085a065db6] Running
	I1007 13:07:12.633033 1694879 system_pods.go:89] "metrics-server-84c5f94fbc-zhbq5" [aadc85ae-34d8-46da-8c72-e453e7246ef7] Running
	I1007 13:07:12.633038 1694879 system_pods.go:89] "nvidia-device-plugin-daemonset-mgxtx" [981684ce-573b-4c82-a5d9-19d8c41421ce] Running
	I1007 13:07:12.633044 1694879 system_pods.go:89] "registry-66c9cd494c-b8457" [37368b21-bd4d-4d7c-b2ee-31f62690e0b7] Running
	I1007 13:07:12.633051 1694879 system_pods.go:89] "registry-proxy-p4tjk" [7f540d5b-5976-4e89-b2f2-c934d659d3f3] Running
	I1007 13:07:12.633055 1694879 system_pods.go:89] "snapshot-controller-56fcc65765-dzq9x" [eb3418ae-d06e-4798-ab91-395da46f8aa0] Running
	I1007 13:07:12.633059 1694879 system_pods.go:89] "snapshot-controller-56fcc65765-zqkd5" [67b9d86f-dbed-4441-929d-1cc25f4c2d59] Running
	I1007 13:07:12.633063 1694879 system_pods.go:89] "storage-provisioner" [9832c3db-5664-45e0-8be0-4521d011f68b] Running
	I1007 13:07:12.633077 1694879 system_pods.go:126] duration metric: took 10.050502ms to wait for k8s-apps to be running ...
	I1007 13:07:12.633101 1694879 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:07:12.633165 1694879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:07:12.645370 1694879 system_svc.go:56] duration metric: took 12.259666ms WaitForService to wait for kubelet
	I1007 13:07:12.645396 1694879 kubeadm.go:582] duration metric: took 2m22.068585334s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:07:12.645417 1694879 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:07:12.649188 1694879 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 13:07:12.649221 1694879 node_conditions.go:123] node cpu capacity is 2
	I1007 13:07:12.649233 1694879 node_conditions.go:105] duration metric: took 3.811104ms to run NodePressure ...
	I1007 13:07:12.649246 1694879 start.go:241] waiting for startup goroutines ...
	I1007 13:07:12.649253 1694879 start.go:246] waiting for cluster config update ...
	I1007 13:07:12.649268 1694879 start.go:255] writing updated cluster config ...
	I1007 13:07:12.649573 1694879 ssh_runner.go:195] Run: rm -f paused
	I1007 13:07:13.000900 1694879 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:07:13.006700 1694879 out.go:177] * Done! kubectl is now configured to use "addons-779469" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 07 13:19:45 addons-779469 crio[968]: time="2024-10-07 13:19:45.371634822Z" level=info msg="Stopped pod sandbox (already stopped): 90be504daff12b8a6f8a41338937e1c7919fb62ade9ac8c7fe4fbf9bbb93d149" id=39bb2af8-810f-470a-88d7-6ba3e0687fc0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 13:19:45 addons-779469 crio[968]: time="2024-10-07 13:19:45.371955446Z" level=info msg="Removing pod sandbox: 90be504daff12b8a6f8a41338937e1c7919fb62ade9ac8c7fe4fbf9bbb93d149" id=41d80196-8f17-459f-a032-151b3d41b0cc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 13:19:45 addons-779469 crio[968]: time="2024-10-07 13:19:45.383459945Z" level=info msg="Removed pod sandbox: 90be504daff12b8a6f8a41338937e1c7919fb62ade9ac8c7fe4fbf9bbb93d149" id=41d80196-8f17-459f-a032-151b3d41b0cc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 13:19:45 addons-779469 crio[968]: time="2024-10-07 13:19:45.384162188Z" level=info msg="Stopping pod sandbox: fd24691374db4ca18fd923443c69a64c3d140d4e9b05341b4b72725959200197" id=cd60f10e-9e32-4920-a06d-c8e35ab95e7f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 13:19:45 addons-779469 crio[968]: time="2024-10-07 13:19:45.384280446Z" level=info msg="Stopped pod sandbox (already stopped): fd24691374db4ca18fd923443c69a64c3d140d4e9b05341b4b72725959200197" id=cd60f10e-9e32-4920-a06d-c8e35ab95e7f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 13:19:45 addons-779469 crio[968]: time="2024-10-07 13:19:45.385237132Z" level=info msg="Removing pod sandbox: fd24691374db4ca18fd923443c69a64c3d140d4e9b05341b4b72725959200197" id=00def7a0-02e8-42fc-a705-342e31eb88f8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 13:19:45 addons-779469 crio[968]: time="2024-10-07 13:19:45.396210157Z" level=info msg="Removed pod sandbox: fd24691374db4ca18fd923443c69a64c3d140d4e9b05341b4b72725959200197" id=00def7a0-02e8-42fc-a705-342e31eb88f8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 13:19:45 addons-779469 crio[968]: time="2024-10-07 13:19:45.864377138Z" level=warning msg="Stopping container 26c6bfdc08fd35a6dd89e4bce5910d7fbdd85139ecbec234e03ee1495c8867b3 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=c4aa400a-bd6e-4d5c-af75-a6d85e609b6d name=/runtime.v1.RuntimeService/StopContainer
	Oct 07 13:19:45 addons-779469 conmon[5684]: conmon 26c6bfdc08fd35a6dd89 <ninfo>: container 5695 exited with status 137
	Oct 07 13:19:46 addons-779469 crio[968]: time="2024-10-07 13:19:46.002029836Z" level=info msg="Stopped container 26c6bfdc08fd35a6dd89e4bce5910d7fbdd85139ecbec234e03ee1495c8867b3: ingress-nginx/ingress-nginx-controller-bc57996ff-nlxnm/controller" id=c4aa400a-bd6e-4d5c-af75-a6d85e609b6d name=/runtime.v1.RuntimeService/StopContainer
	Oct 07 13:19:46 addons-779469 crio[968]: time="2024-10-07 13:19:46.002933158Z" level=info msg="Stopping pod sandbox: cbfa9a000b43fa0f1fc0c434f361aff1b67b701d040d1ee28a038ed64e90eb2c" id=0a383fc9-768c-41bd-a9fc-1296b6327184 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 13:19:46 addons-779469 crio[968]: time="2024-10-07 13:19:46.008267969Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-SOFX6IVKRRNKJM3W - [0:0]\n:KUBE-HP-WPOXEEVSE2MV2XV4 - [0:0]\n-X KUBE-HP-WPOXEEVSE2MV2XV4\n-X KUBE-HP-SOFX6IVKRRNKJM3W\nCOMMIT\n"
	Oct 07 13:19:46 addons-779469 crio[968]: time="2024-10-07 13:19:46.009805942Z" level=info msg="Closing host port tcp:80"
	Oct 07 13:19:46 addons-779469 crio[968]: time="2024-10-07 13:19:46.009865936Z" level=info msg="Closing host port tcp:443"
	Oct 07 13:19:46 addons-779469 crio[968]: time="2024-10-07 13:19:46.011311152Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 07 13:19:46 addons-779469 crio[968]: time="2024-10-07 13:19:46.011344850Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 07 13:19:46 addons-779469 crio[968]: time="2024-10-07 13:19:46.011574178Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-nlxnm Namespace:ingress-nginx ID:cbfa9a000b43fa0f1fc0c434f361aff1b67b701d040d1ee28a038ed64e90eb2c UID:664248ba-5384-463e-98d4-c3151a40f7db NetNS:/var/run/netns/732631e4-4ac0-4a28-b2d0-8c0ed5ae8f7c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 07 13:19:46 addons-779469 crio[968]: time="2024-10-07 13:19:46.011742658Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-nlxnm from CNI network \"kindnet\" (type=ptp)"
	Oct 07 13:19:46 addons-779469 crio[968]: time="2024-10-07 13:19:46.039892601Z" level=info msg="Stopped pod sandbox: cbfa9a000b43fa0f1fc0c434f361aff1b67b701d040d1ee28a038ed64e90eb2c" id=0a383fc9-768c-41bd-a9fc-1296b6327184 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 13:19:46 addons-779469 crio[968]: time="2024-10-07 13:19:46.133507170Z" level=info msg="Removing container: 26c6bfdc08fd35a6dd89e4bce5910d7fbdd85139ecbec234e03ee1495c8867b3" id=5c08b678-5dfb-4aaf-b56e-a898c82484de name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 07 13:19:46 addons-779469 crio[968]: time="2024-10-07 13:19:46.149910101Z" level=info msg="Removed container 26c6bfdc08fd35a6dd89e4bce5910d7fbdd85139ecbec234e03ee1495c8867b3: ingress-nginx/ingress-nginx-controller-bc57996ff-nlxnm/controller" id=5c08b678-5dfb-4aaf-b56e-a898c82484de name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 07 13:20:45 addons-779469 crio[968]: time="2024-10-07 13:20:45.399112024Z" level=info msg="Stopping pod sandbox: cbfa9a000b43fa0f1fc0c434f361aff1b67b701d040d1ee28a038ed64e90eb2c" id=41b9e89d-058e-4725-b2d2-77ce684a68a3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 13:20:45 addons-779469 crio[968]: time="2024-10-07 13:20:45.399158374Z" level=info msg="Stopped pod sandbox (already stopped): cbfa9a000b43fa0f1fc0c434f361aff1b67b701d040d1ee28a038ed64e90eb2c" id=41b9e89d-058e-4725-b2d2-77ce684a68a3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 13:20:45 addons-779469 crio[968]: time="2024-10-07 13:20:45.399503875Z" level=info msg="Removing pod sandbox: cbfa9a000b43fa0f1fc0c434f361aff1b67b701d040d1ee28a038ed64e90eb2c" id=361a5253-dc66-4fdc-bd62-6e6e6a179285 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 13:20:45 addons-779469 crio[968]: time="2024-10-07 13:20:45.408692582Z" level=info msg="Removed pod sandbox: cbfa9a000b43fa0f1fc0c434f361aff1b67b701d040d1ee28a038ed64e90eb2c" id=361a5253-dc66-4fdc-bd62-6e6e6a179285 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f03a17aca82ab       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   About a minute ago   Running             hello-world-app           0                   8d03b82f6f307       hello-world-app-55bf9c44b4-nkjm2
	7f30df4757c5e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     3 minutes ago        Running             busybox                   0                   fd684e037cfa4       busybox
	c42b6b433c54a       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         4 minutes ago        Running             nginx                     0                   d3b9e9b83b78e       nginx
	9bfab86c6a487       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98        15 minutes ago       Running             local-path-provisioner    0                   35b2b92b491a2       local-path-provisioner-86d989889c-rrhx5
	fc0b148e46b99       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   15 minutes ago       Running             metrics-server            0                   ae537516a580d       metrics-server-84c5f94fbc-zhbq5
	457b9e07e729d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        16 minutes ago       Running             storage-provisioner       0                   71893a9e25358       storage-provisioner
	be3a55f354462       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        16 minutes ago       Running             coredns                   0                   6ff1e68c7ffac       coredns-7c65d6cfc9-kfrdl
	f5c08bdd49644       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                        16 minutes ago       Running             kindnet-cni               0                   27d28a7719dbe       kindnet-7g5zx
	24b2cc84e135f       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                        16 minutes ago       Running             kube-proxy                0                   32f4c8a9cf354       kube-proxy-6ncrf
	e48b3531357e8       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                        16 minutes ago       Running             kube-controller-manager   0                   39515888518c6       kube-controller-manager-addons-779469
	2e2a39495c277       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                        16 minutes ago       Running             kube-scheduler            0                   ee8de6b60a689       kube-scheduler-addons-779469
	b8cf421e0e643       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                        16 minutes ago       Running             kube-apiserver            0                   93c97f093c738       kube-apiserver-addons-779469
	c0d2a0e8c63b6       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        16 minutes ago       Running             etcd                      0                   8da69e060c1ec       etcd-addons-779469
	
	
	==> coredns [be3a55f3544621da090cd3870c2f984590c337d70096d88efaf4568dc6284c6d] <==
	[INFO] 10.244.0.19:52473 - 33544 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042715s
	[INFO] 10.244.0.19:52473 - 19894 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001333604s
	[INFO] 10.244.0.19:43576 - 52703 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002329199s
	[INFO] 10.244.0.19:43576 - 41197 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001857562s
	[INFO] 10.244.0.19:52473 - 4498 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.0020538s
	[INFO] 10.244.0.19:52473 - 1486 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000108699s
	[INFO] 10.244.0.19:43576 - 45600 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049311s
	[INFO] 10.244.0.19:36523 - 26035 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000136826s
	[INFO] 10.244.0.19:48366 - 43525 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000049845s
	[INFO] 10.244.0.19:36523 - 55094 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000091822s
	[INFO] 10.244.0.19:48366 - 11603 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000044036s
	[INFO] 10.244.0.19:48366 - 28540 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006349s
	[INFO] 10.244.0.19:36523 - 1938 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000195507s
	[INFO] 10.244.0.19:36523 - 920 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000114007s
	[INFO] 10.244.0.19:48366 - 33819 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000104186s
	[INFO] 10.244.0.19:48366 - 33987 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058042s
	[INFO] 10.244.0.19:36523 - 40604 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000088425s
	[INFO] 10.244.0.19:36523 - 61001 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000107772s
	[INFO] 10.244.0.19:48366 - 54688 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000083263s
	[INFO] 10.244.0.19:48366 - 47418 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001158224s
	[INFO] 10.244.0.19:36523 - 46088 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001464432s
	[INFO] 10.244.0.19:48366 - 3704 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001800547s
	[INFO] 10.244.0.19:36523 - 44521 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001932498s
	[INFO] 10.244.0.19:36523 - 62021 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006573s
	[INFO] 10.244.0.19:48366 - 18649 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000035535s
	
	
	==> describe nodes <==
	Name:               addons-779469
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-779469
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=addons-779469
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T13_04_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-779469
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 13:04:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-779469
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 13:21:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 13:19:54 +0000   Mon, 07 Oct 2024 13:04:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 13:19:54 +0000   Mon, 07 Oct 2024 13:04:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 13:19:54 +0000   Mon, 07 Oct 2024 13:04:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 13:19:54 +0000   Mon, 07 Oct 2024 13:05:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-779469
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b185a85bbe647cab0ea7a44daf0565d
	  System UUID:                54196e40-8b0f-42a4-8424-aec1d4cf9b79
	  Boot ID:                    aa802e8e-7a27-4e80-bbf6-ed0c45666ec2
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-nkjm2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 coredns-7c65d6cfc9-kfrdl                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 etcd-addons-779469                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-7g5zx                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-addons-779469               250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-779469      200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-6ncrf                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-779469               100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-84c5f94fbc-zhbq5            100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         16m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  local-path-storage          local-path-provisioner-86d989889c-rrhx5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 16m   kube-proxy       
	  Normal   Starting                 16m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  16m   kubelet          Node addons-779469 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m   kubelet          Node addons-779469 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m   kubelet          Node addons-779469 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m   node-controller  Node addons-779469 event: Registered Node addons-779469 in Controller
	  Normal   NodeReady                16m   kubelet          Node addons-779469 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [c0d2a0e8c63b67ca492592c777ee7d42b1b776c3c05465b4b9476124bf50f596] <==
	{"level":"info","ts":"2024-10-07T13:04:38.943741Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-07T13:04:51.260957Z","caller":"traceutil/trace.go:171","msg":"trace[1187122494] transaction","detail":"{read_only:false; response_revision:354; number_of_response:1; }","duration":"153.946896ms","start":"2024-10-07T13:04:51.106987Z","end":"2024-10-07T13:04:51.260934Z","steps":["trace[1187122494] 'process raft request'  (duration: 98.213628ms)","trace[1187122494] 'compare'  (duration: 54.408645ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T13:04:53.405087Z","caller":"traceutil/trace.go:171","msg":"trace[1322881476] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"116.257936ms","start":"2024-10-07T13:04:53.288813Z","end":"2024-10-07T13:04:53.405071Z","steps":["trace[1322881476] 'process raft request'  (duration: 115.890716ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:04:53.540216Z","caller":"traceutil/trace.go:171","msg":"trace[1909237663] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"116.726373ms","start":"2024-10-07T13:04:53.423472Z","end":"2024-10-07T13:04:53.540199Z","steps":["trace[1909237663] 'process raft request'  (duration: 100.233669ms)","trace[1909237663] 'compare'  (duration: 15.994262ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T13:04:53.540450Z","caller":"traceutil/trace.go:171","msg":"trace[1465488526] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"116.88906ms","start":"2024-10-07T13:04:53.423554Z","end":"2024-10-07T13:04:53.540443Z","steps":["trace[1465488526] 'process raft request'  (duration: 116.239467ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:04:53.706090Z","caller":"traceutil/trace.go:171","msg":"trace[1880002528] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"105.141531ms","start":"2024-10-07T13:04:53.600724Z","end":"2024-10-07T13:04:53.705866Z","steps":["trace[1880002528] 'process raft request'  (duration: 45.286156ms)","trace[1880002528] 'compare'  (duration: 59.685221ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T13:04:54.094112Z","caller":"traceutil/trace.go:171","msg":"trace[1510152245] linearizableReadLoop","detail":"{readStateIndex:387; appliedIndex:384; }","duration":"100.181173ms","start":"2024-10-07T13:04:53.993917Z","end":"2024-10-07T13:04:54.094098Z","steps":["trace[1510152245] 'read index received'  (duration: 132.124µs)","trace[1510152245] 'applied index is now lower than readState.Index'  (duration: 100.024713ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T13:04:54.094431Z","caller":"traceutil/trace.go:171","msg":"trace[1936988130] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"142.348415ms","start":"2024-10-07T13:04:53.952044Z","end":"2024-10-07T13:04:54.094393Z","steps":["trace[1936988130] 'process raft request'  (duration: 107.914219ms)","trace[1936988130] 'compare'  (duration: 33.993976ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T13:04:54.094723Z","caller":"traceutil/trace.go:171","msg":"trace[859127559] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"100.979359ms","start":"2024-10-07T13:04:53.993737Z","end":"2024-10-07T13:04:54.094716Z","steps":["trace[859127559] 'process raft request'  (duration: 100.306397ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:04:54.094980Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.046483ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-779469\" ","response":"range_response_count:1 size:5745"}
	{"level":"info","ts":"2024-10-07T13:04:54.095091Z","caller":"traceutil/trace.go:171","msg":"trace[1057137103] range","detail":"{range_begin:/registry/minions/addons-779469; range_end:; response_count:1; response_revision:374; }","duration":"101.169574ms","start":"2024-10-07T13:04:53.993913Z","end":"2024-10-07T13:04:54.095083Z","steps":["trace[1057137103] 'agreement among raft nodes before linearized reading'  (duration: 100.974739ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:04:54.935903Z","caller":"traceutil/trace.go:171","msg":"trace[2134927529] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"129.506183ms","start":"2024-10-07T13:04:54.806374Z","end":"2024-10-07T13:04:54.935880Z","steps":["trace[2134927529] 'process raft request'  (duration: 129.385849ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:04:54.936569Z","caller":"traceutil/trace.go:171","msg":"trace[940512204] linearizableReadLoop","detail":"{readStateIndex:404; appliedIndex:404; }","duration":"127.00876ms","start":"2024-10-07T13:04:54.809548Z","end":"2024-10-07T13:04:54.936557Z","steps":["trace[940512204] 'read index received'  (duration: 127.00259ms)","trace[940512204] 'applied index is now lower than readState.Index'  (duration: 4.694µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-07T13:04:54.963043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.474773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-addons-779469\" ","response":"range_response_count:1 size:4488"}
	{"level":"info","ts":"2024-10-07T13:04:54.963107Z","caller":"traceutil/trace.go:171","msg":"trace[585043007] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-addons-779469; range_end:; response_count:1; response_revision:389; }","duration":"153.550275ms","start":"2024-10-07T13:04:54.809542Z","end":"2024-10-07T13:04:54.963093Z","steps":["trace[585043007] 'agreement among raft nodes before linearized reading'  (duration: 127.249912ms)","trace[585043007] 'range keys from in-memory index tree'  (duration: 26.193469ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T13:04:54.968215Z","caller":"traceutil/trace.go:171","msg":"trace[936267434] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"149.443043ms","start":"2024-10-07T13:04:54.818751Z","end":"2024-10-07T13:04:54.968194Z","steps":["trace[936267434] 'process raft request'  (duration: 139.955985ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:04:54.981989Z","caller":"traceutil/trace.go:171","msg":"trace[249121898] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"153.922862ms","start":"2024-10-07T13:04:54.828050Z","end":"2024-10-07T13:04:54.981973Z","steps":["trace[249121898] 'process raft request'  (duration: 140.097332ms)","trace[249121898] 'compare'  (duration: 13.727842ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-07T13:04:54.999564Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.302993ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T13:04:54.999727Z","caller":"traceutil/trace.go:171","msg":"trace[1621784510] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:399; }","duration":"180.45997ms","start":"2024-10-07T13:04:54.819239Z","end":"2024-10-07T13:04:54.999699Z","steps":["trace[1621784510] 'agreement among raft nodes before linearized reading'  (duration: 180.265127ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:14:40.182383Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1494}
	{"level":"info","ts":"2024-10-07T13:14:40.217210Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1494,"took":"34.266919ms","hash":3207175570,"current-db-size-bytes":5992448,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3166208,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2024-10-07T13:14:40.217265Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3207175570,"revision":1494,"compact-revision":-1}
	{"level":"info","ts":"2024-10-07T13:19:40.190281Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1910}
	{"level":"info","ts":"2024-10-07T13:19:40.209310Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1910,"took":"18.260329ms","hash":2351198326,"current-db-size-bytes":5992448,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":4374528,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-10-07T13:19:40.209369Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2351198326,"revision":1910,"compact-revision":1494}
	
	
	==> kernel <==
	 13:21:36 up 1 day,  3:04,  0 users,  load average: 0.08, 0.56, 1.36
	Linux addons-779469 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [f5c08bdd4964441223fff82d3b0012b2a7fa6a6825d99579fc6a72e464169ffd] <==
	I1007 13:19:30.806551       1 main.go:299] handling current node
	I1007 13:19:40.800957       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:19:40.800996       1 main.go:299] handling current node
	I1007 13:19:50.799484       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:19:50.799518       1 main.go:299] handling current node
	I1007 13:20:00.799260       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:20:00.799308       1 main.go:299] handling current node
	I1007 13:20:10.798705       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:20:10.798778       1 main.go:299] handling current node
	I1007 13:20:20.800013       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:20:20.800138       1 main.go:299] handling current node
	I1007 13:20:30.801365       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:20:30.801401       1 main.go:299] handling current node
	I1007 13:20:40.799592       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:20:40.799702       1 main.go:299] handling current node
	I1007 13:20:50.799044       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:20:50.799079       1 main.go:299] handling current node
	I1007 13:21:00.798640       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:21:00.798674       1 main.go:299] handling current node
	I1007 13:21:10.798661       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:21:10.798700       1 main.go:299] handling current node
	I1007 13:21:20.807612       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:21:20.807730       1 main.go:299] handling current node
	I1007 13:21:30.803817       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:21:30.803857       1 main.go:299] handling current node
	
	
	==> kube-apiserver [b8cf421e0e643dfa9bfa5cb22c5f3d75f87be9b13fd964344fae94afc80d820b] <==
	 > logger="UnhandledError"
	I1007 13:07:01.553470       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1007 13:15:26.424354       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.241.1"}
	E1007 13:15:41.568056       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1007 13:15:54.347732       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1007 13:15:57.808470       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1007 13:16:30.579213       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1007 13:16:57.742849       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 13:16:57.742981       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 13:16:57.799184       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 13:16:57.800053       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 13:16:57.898292       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 13:16:57.898339       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 13:16:57.903196       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 13:16:57.903329       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 13:16:57.930205       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 13:16:57.930254       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1007 13:16:58.898588       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1007 13:16:58.930693       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1007 13:16:59.047840       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1007 13:17:11.585912       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1007 13:17:12.636729       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1007 13:17:17.183142       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1007 13:17:17.472221       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.109.87"}
	I1007 13:19:37.636077       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.138.164"}
	
	
	==> kube-controller-manager [e48b3531357e89a9042a2166371a38e27c52bcc2c005128a78f8d85912a5a34d] <==
	W1007 13:19:40.514995       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:19:40.515038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 13:19:42.818679       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I1007 13:19:42.825490       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I1007 13:19:42.830394       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="7.762µs"
	W1007 13:19:50.710140       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:19:50.710181       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 13:19:52.916148       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I1007 13:19:54.982675       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-779469"
	W1007 13:20:10.429660       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:20:10.429702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 13:20:17.370025       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:20:17.370069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 13:20:35.613841       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:20:35.613883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 13:20:43.160937       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:20:43.160983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 13:21:04.981322       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:21:04.981364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 13:21:06.746972       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:21:06.747119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 13:21:07.668479       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:21:07.668523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 13:21:22.050743       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 13:21:22.050791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [24b2cc84e135fc28cb27fbd92aed31f80e1f521a3cb5b5b037e09d971dbfa680] <==
	I1007 13:04:50.447774       1 server_linux.go:66] "Using iptables proxy"
	I1007 13:04:50.560966       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1007 13:04:50.561114       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 13:04:50.646955       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1007 13:04:50.647095       1 server_linux.go:169] "Using iptables Proxier"
	I1007 13:04:50.651856       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 13:04:50.652510       1 server.go:483] "Version info" version="v1.31.1"
	I1007 13:04:50.652585       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 13:04:50.669284       1 config.go:199] "Starting service config controller"
	I1007 13:04:50.669861       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 13:04:50.669944       1 config.go:105] "Starting endpoint slice config controller"
	I1007 13:04:50.669982       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 13:04:50.670583       1 config.go:328] "Starting node config controller"
	I1007 13:04:50.672000       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 13:04:50.772635       1 shared_informer.go:320] Caches are synced for node config
	I1007 13:04:50.779828       1 shared_informer.go:320] Caches are synced for service config
	I1007 13:04:50.779911       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2e2a39495c277f632c42e48741c60a17c0c7c343a40001112797a4a47ce801fa] <==
	W1007 13:04:43.027679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 13:04:43.027794       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.027913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 13:04:43.027962       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.028079       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 13:04:43.028123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.028214       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 13:04:43.028254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.028421       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 13:04:43.028468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.028559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 13:04:43.028642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.028609       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 13:04:43.028796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.030342       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 13:04:43.030377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.030346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 13:04:43.030402       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.030466       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 13:04:43.030514       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.030529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 13:04:43.030624       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:04:43.030476       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 13:04:43.030655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1007 13:04:44.321571       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 13:19:46 addons-779469 kubelet[1493]: I1007 13:19:46.814659    1493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="664248ba-5384-463e-98d4-c3151a40f7db" path="/var/lib/kubelet/pods/664248ba-5384-463e-98d4-c3151a40f7db/volumes"
	Oct 07 13:19:55 addons-779469 kubelet[1493]: E1007 13:19:55.160945    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307195160665628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:19:55 addons-779469 kubelet[1493]: E1007 13:19:55.160984    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307195160665628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:20:05 addons-779469 kubelet[1493]: E1007 13:20:05.163496    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307205163180064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:20:05 addons-779469 kubelet[1493]: E1007 13:20:05.163570    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307205163180064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:20:15 addons-779469 kubelet[1493]: E1007 13:20:15.166702    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307215166422495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:20:15 addons-779469 kubelet[1493]: E1007 13:20:15.166749    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307215166422495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:20:19 addons-779469 kubelet[1493]: I1007 13:20:19.812988    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 13:20:25 addons-779469 kubelet[1493]: E1007 13:20:25.169656    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307225169334888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:20:25 addons-779469 kubelet[1493]: E1007 13:20:25.169702    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307225169334888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:20:35 addons-779469 kubelet[1493]: E1007 13:20:35.172472    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307235172231043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:20:35 addons-779469 kubelet[1493]: E1007 13:20:35.172516    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307235172231043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:20:45 addons-779469 kubelet[1493]: E1007 13:20:45.175425    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307245175120840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:20:45 addons-779469 kubelet[1493]: E1007 13:20:45.175474    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307245175120840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:20:55 addons-779469 kubelet[1493]: E1007 13:20:55.178532    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307255178260627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:20:55 addons-779469 kubelet[1493]: E1007 13:20:55.178575    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307255178260627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:21:05 addons-779469 kubelet[1493]: E1007 13:21:05.181450    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307265181210015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:21:05 addons-779469 kubelet[1493]: E1007 13:21:05.181497    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307265181210015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:21:15 addons-779469 kubelet[1493]: E1007 13:21:15.185548    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307275184366983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:21:15 addons-779469 kubelet[1493]: E1007 13:21:15.185593    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307275184366983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:21:25 addons-779469 kubelet[1493]: E1007 13:21:25.188614    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307285188380906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:21:25 addons-779469 kubelet[1493]: E1007 13:21:25.188651    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307285188380906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:21:25 addons-779469 kubelet[1493]: I1007 13:21:25.812982    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 13:21:35 addons-779469 kubelet[1493]: E1007 13:21:35.191827    1493 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307295191573646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:21:35 addons-779469 kubelet[1493]: E1007 13:21:35.191865    1493 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728307295191573646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [457b9e07e729d3ad0810718988e98d201b8b41ad16425a6d14268f34d6e00015] <==
	I1007 13:05:32.311284       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 13:05:32.329362       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 13:05:32.330915       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 13:05:32.346166       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 13:05:32.350005       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a27e81e1-200b-4e26-81a3-ca764a02c265", APIVersion:"v1", ResourceVersion:"886", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-779469_1abdc3ad-4d93-43cd-9a9c-3999da9bd98b became leader
	I1007 13:05:32.350254       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-779469_1abdc3ad-4d93-43cd-9a9c-3999da9bd98b!
	I1007 13:05:32.455090       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-779469_1abdc3ad-4d93-43cd-9a9c-3999da9bd98b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-779469 -n addons-779469
helpers_test.go:261: (dbg) Run:  kubectl --context addons-779469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (340.60s)
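Note: in the post-mortem logs above the kubelet repeatedly reports secret "gcp-auth" not found for the default/busybox pod, and the kube-scheduler shows the usual transient list/watch "forbidden" errors from before its RBAC caches synced. A minimal sketch of manual checks for this state, assuming the stock minikube gcp-auth addon namespace and the standard metrics-server APIService name (neither is taken from this report):

	# does the pull secret the busybox pod expects exist, and is the gcp-auth webhook running?
	kubectl --context addons-779469 get secret gcp-auth -n default
	kubectl --context addons-779469 get pods -n gcp-auth
	# is metrics-server registered and serving metrics?
	kubectl --context addons-779469 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-779469 top nodes
	# confirm the scheduler's RBAC once startup has settled
	kubectl --context addons-779469 auth can-i list statefulsets --as=system:kube-scheduler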

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (128.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-362969 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1007 13:37:13.830802 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-362969 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m3.366413612s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:591: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-362969       NotReady   control-plane   11m     v1.31.1
	ha-362969-m02   Ready      control-plane   11m     v1.31.1
	ha-362969-m04   Ready      <none>          8m52s   v1.31.1

                                                
                                                
-- /stdout --
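The table above shows the restarted primary control-plane, ha-362969, stuck in NotReady while the other two nodes recovered. A hedged first step for digging into that by hand (node name taken from the table; the commands are standard kubectl, not part of the test suite):

	# kubelet-reported conditions, taints and recent events for the NotReady node
	kubectl describe node ha-362969
	# control-plane and CNI pods scheduled on that node
	kubectl get pods -n kube-system -o wide --field-selector spec.nodeName=ha-362969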
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:599: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
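The "Unknown" value above is the Ready condition of ha-362969. An equivalent of the test's go-template query, sketched here as a jsonpath one-liner for convenience rather than taken from the test suite:

	# node name and Ready condition status, one line per node
	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'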
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-362969
helpers_test.go:235: (dbg) docker inspect ha-362969:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c4808aca2e7dbdef64b564ae3cdc7d364bfdf2ea6a7fa88618716805ca19bddb",
	        "Created": "2024-10-07T13:25:54.625726911Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1760505,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-07T13:35:55.751238813Z",
	            "FinishedAt": "2024-10-07T13:35:54.948393231Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/c4808aca2e7dbdef64b564ae3cdc7d364bfdf2ea6a7fa88618716805ca19bddb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4808aca2e7dbdef64b564ae3cdc7d364bfdf2ea6a7fa88618716805ca19bddb/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4808aca2e7dbdef64b564ae3cdc7d364bfdf2ea6a7fa88618716805ca19bddb/hosts",
	        "LogPath": "/var/lib/docker/containers/c4808aca2e7dbdef64b564ae3cdc7d364bfdf2ea6a7fa88618716805ca19bddb/c4808aca2e7dbdef64b564ae3cdc7d364bfdf2ea6a7fa88618716805ca19bddb-json.log",
	        "Name": "/ha-362969",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-362969:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-362969",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8f5badce940c8f5fb20b8121e132d79f9544d95f7caa99ea6857012b849bc7ed-init/diff:/var/lib/docker/overlay2/ba883e93760810ee908affcdb026e83ee6095990c52f4c87c201773cc7ffeb3e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8f5badce940c8f5fb20b8121e132d79f9544d95f7caa99ea6857012b849bc7ed/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8f5badce940c8f5fb20b8121e132d79f9544d95f7caa99ea6857012b849bc7ed/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8f5badce940c8f5fb20b8121e132d79f9544d95f7caa99ea6857012b849bc7ed/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-362969",
	                "Source": "/var/lib/docker/volumes/ha-362969/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-362969",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-362969",
	                "name.minikube.sigs.k8s.io": "ha-362969",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9efb88ca6c41191d2f811273e2f8d91b4c862ee46fb1b808be0b933b8b293d92",
	            "SandboxKey": "/var/run/docker/netns/9efb88ca6c41",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38326"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38327"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38330"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38328"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38329"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-362969": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0ea298a5d452044ff4b878f84f21cc1ae3de827cbb9e6b5920d3c3d46009e4e1",
	                    "EndpointID": "8e8135c162593d9b0dd1c46302b724ca61e6f0388cf9e9314b399db2ba33e2bb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-362969",
	                        "c4808aca2e7d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
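For quick triage, the same details can be pulled out of docker inspect with a Go template instead of reading the full JSON dump. A sketch using fields shown above (container name and port come from this report; the nested index pattern mirrors the one minikube itself uses in the logs below):

	# container state and the last start/stop timestamps
	docker inspect -f '{{.State.Status}} started={{.State.StartedAt}} finished={{.State.FinishedAt}}' ha-362969
	# host port that the API server (8443/tcp) is published on
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-362969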
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-362969 -n ha-362969
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-362969 logs -n 25: (2.201438231s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-362969 cp ha-362969-m03:/home/docker/cp-test.txt                              | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | ha-362969-m04:/home/docker/cp-test_ha-362969-m03_ha-362969-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-362969 ssh -n                                                                 | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | ha-362969-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-362969 ssh -n ha-362969-m04 sudo cat                                          | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | /home/docker/cp-test_ha-362969-m03_ha-362969-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-362969 cp testdata/cp-test.txt                                                | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | ha-362969-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-362969 ssh -n                                                                 | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | ha-362969-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-362969 cp ha-362969-m04:/home/docker/cp-test.txt                              | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1546648790/001/cp-test_ha-362969-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-362969 ssh -n                                                                 | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | ha-362969-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-362969 cp ha-362969-m04:/home/docker/cp-test.txt                              | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | ha-362969:/home/docker/cp-test_ha-362969-m04_ha-362969.txt                       |           |         |         |                     |                     |
	| ssh     | ha-362969 ssh -n                                                                 | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | ha-362969-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-362969 ssh -n ha-362969 sudo cat                                              | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | /home/docker/cp-test_ha-362969-m04_ha-362969.txt                                 |           |         |         |                     |                     |
	| cp      | ha-362969 cp ha-362969-m04:/home/docker/cp-test.txt                              | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | ha-362969-m02:/home/docker/cp-test_ha-362969-m04_ha-362969-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-362969 ssh -n                                                                 | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | ha-362969-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-362969 ssh -n ha-362969-m02 sudo cat                                          | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | /home/docker/cp-test_ha-362969-m04_ha-362969-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-362969 cp ha-362969-m04:/home/docker/cp-test.txt                              | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | ha-362969-m03:/home/docker/cp-test_ha-362969-m04_ha-362969-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-362969 ssh -n                                                                 | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | ha-362969-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-362969 ssh -n ha-362969-m03 sudo cat                                          | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | /home/docker/cp-test_ha-362969-m04_ha-362969-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-362969 node stop m02 -v=7                                                     | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-362969 node start m02 -v=7                                                    | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:30 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-362969 -v=7                                                           | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-362969 -v=7                                                                | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:30 UTC | 07 Oct 24 13:31 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-362969 --wait=true -v=7                                                    | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:31 UTC | 07 Oct 24 13:35 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-362969                                                                | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:35 UTC |                     |
	| node    | ha-362969 node delete m03 -v=7                                                   | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:35 UTC | 07 Oct 24 13:35 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-362969 stop -v=7                                                              | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:35 UTC | 07 Oct 24 13:35 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-362969 --wait=true                                                         | ha-362969 | jenkins | v1.34.0 | 07 Oct 24 13:35 UTC | 07 Oct 24 13:37 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:35:55
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:35:55.417758 1760312 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:35:55.417883 1760312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:35:55.417894 1760312 out.go:358] Setting ErrFile to fd 2...
	I1007 13:35:55.417899 1760312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:35:55.418167 1760312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-1688750/.minikube/bin
	I1007 13:35:55.418536 1760312 out.go:352] Setting JSON to false
	I1007 13:35:55.419386 1760312 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":98307,"bootTime":1728209849,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1007 13:35:55.419457 1760312 start.go:139] virtualization:  
	I1007 13:35:55.422776 1760312 out.go:177] * [ha-362969] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 13:35:55.426222 1760312 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:35:55.426280 1760312 notify.go:220] Checking for updates...
	I1007 13:35:55.431794 1760312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:35:55.434341 1760312 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-1688750/kubeconfig
	I1007 13:35:55.436794 1760312 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-1688750/.minikube
	I1007 13:35:55.439284 1760312 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 13:35:55.441726 1760312 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:35:55.444675 1760312 config.go:182] Loaded profile config "ha-362969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:35:55.445212 1760312 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:35:55.466109 1760312 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 13:35:55.466228 1760312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:35:55.524406 1760312 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:41 SystemTime:2024-10-07 13:35:55.5149809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:35:55.524513 1760312 docker.go:318] overlay module found
	I1007 13:35:55.528942 1760312 out.go:177] * Using the docker driver based on existing profile
	I1007 13:35:55.531384 1760312 start.go:297] selected driver: docker
	I1007 13:35:55.531400 1760312 start.go:901] validating driver "docker" against &{Name:ha-362969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-362969 Namespace:default APIServerHAVIP:192.168.49.254 APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logvi
ewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: S
taticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:35:55.531608 1760312 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:35:55.531718 1760312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:35:55.581780 1760312 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:41 SystemTime:2024-10-07 13:35:55.572053996 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:35:55.582242 1760312 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:35:55.582271 1760312 cni.go:84] Creating CNI manager for ""
	I1007 13:35:55.582308 1760312 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 13:35:55.582364 1760312 start.go:340] cluster config:
	{Name:ha-362969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-362969 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvi
dia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:35:55.585224 1760312 out.go:177] * Starting "ha-362969" primary control-plane node in "ha-362969" cluster
	I1007 13:35:55.587818 1760312 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 13:35:55.590373 1760312 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 13:35:55.593149 1760312 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:35:55.593204 1760312 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1007 13:35:55.593223 1760312 cache.go:56] Caching tarball of preloaded images
	I1007 13:35:55.593223 1760312 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 13:35:55.593305 1760312 preload.go:172] Found /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1007 13:35:55.593314 1760312 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:35:55.593454 1760312 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/config.json ...
	I1007 13:35:55.611767 1760312 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 13:35:55.611792 1760312 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1007 13:35:55.611808 1760312 cache.go:194] Successfully downloaded all kic artifacts
	I1007 13:35:55.611831 1760312 start.go:360] acquireMachinesLock for ha-362969: {Name:mk519f4416bee6db15e3331a55a9679f144b7ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:35:55.611890 1760312 start.go:364] duration metric: took 38.39µs to acquireMachinesLock for "ha-362969"
	I1007 13:35:55.611923 1760312 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:35:55.611939 1760312 fix.go:54] fixHost starting: 
	I1007 13:35:55.612221 1760312 cli_runner.go:164] Run: docker container inspect ha-362969 --format={{.State.Status}}
	I1007 13:35:55.628073 1760312 fix.go:112] recreateIfNeeded on ha-362969: state=Stopped err=<nil>
	W1007 13:35:55.628112 1760312 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:35:55.630939 1760312 out.go:177] * Restarting existing docker container for "ha-362969" ...
	I1007 13:35:55.633533 1760312 cli_runner.go:164] Run: docker start ha-362969
	I1007 13:35:55.914243 1760312 cli_runner.go:164] Run: docker container inspect ha-362969 --format={{.State.Status}}
	I1007 13:35:55.941091 1760312 kic.go:430] container "ha-362969" state is running.
	I1007 13:35:55.941854 1760312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-362969
	I1007 13:35:55.964884 1760312 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/config.json ...
	I1007 13:35:55.965122 1760312 machine.go:93] provisionDockerMachine start ...
	I1007 13:35:55.965194 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969
	I1007 13:35:55.989339 1760312 main.go:141] libmachine: Using SSH client type: native
	I1007 13:35:55.989602 1760312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38326 <nil> <nil>}
	I1007 13:35:55.989621 1760312 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:35:55.990246 1760312 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
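The dial error above is expected right after "docker start": sshd inside the freshly restarted container is not accepting connections yet, and the provisioner simply retries until the later hostname command succeeds. A minimal Go sketch of that wait-and-retry pattern, using the host-mapped port 38326 from the log (the helper name is hypothetical, not minikube code):

    // waitForSSH polls the mapped SSH port until it accepts a TCP connection
    // or the deadline passes; a real provisioner would then retry the SSH handshake.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s: %w", addr, err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSSH("127.0.0.1:38326", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
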
	I1007 13:35:59.126969 1760312 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-362969
	
	I1007 13:35:59.126995 1760312 ubuntu.go:169] provisioning hostname "ha-362969"
	I1007 13:35:59.127061 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969
	I1007 13:35:59.144597 1760312 main.go:141] libmachine: Using SSH client type: native
	I1007 13:35:59.144847 1760312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38326 <nil> <nil>}
	I1007 13:35:59.144869 1760312 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-362969 && echo "ha-362969" | sudo tee /etc/hostname
	I1007 13:35:59.295177 1760312 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-362969
	
	I1007 13:35:59.295258 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969
	I1007 13:35:59.313240 1760312 main.go:141] libmachine: Using SSH client type: native
	I1007 13:35:59.313516 1760312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38326 <nil> <nil>}
	I1007 13:35:59.313541 1760312 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-362969' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-362969/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-362969' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:35:59.447613 1760312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:35:59.447704 1760312 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18424-1688750/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-1688750/.minikube}
	I1007 13:35:59.447784 1760312 ubuntu.go:177] setting up certificates
	I1007 13:35:59.447811 1760312 provision.go:84] configureAuth start
	I1007 13:35:59.447914 1760312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-362969
	I1007 13:35:59.464727 1760312 provision.go:143] copyHostCerts
	I1007 13:35:59.464772 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-1688750/.minikube/cert.pem
	I1007 13:35:59.464805 1760312 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-1688750/.minikube/cert.pem, removing ...
	I1007 13:35:59.464824 1760312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-1688750/.minikube/cert.pem
	I1007 13:35:59.464901 1760312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-1688750/.minikube/cert.pem (1123 bytes)
	I1007 13:35:59.464996 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-1688750/.minikube/key.pem
	I1007 13:35:59.465018 1760312 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-1688750/.minikube/key.pem, removing ...
	I1007 13:35:59.465023 1760312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-1688750/.minikube/key.pem
	I1007 13:35:59.465061 1760312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-1688750/.minikube/key.pem (1679 bytes)
	I1007 13:35:59.465116 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.pem
	I1007 13:35:59.465143 1760312 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.pem, removing ...
	I1007 13:35:59.465150 1760312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.pem
	I1007 13:35:59.465176 1760312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.pem (1078 bytes)
	I1007 13:35:59.465236 1760312 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca-key.pem org=jenkins.ha-362969 san=[127.0.0.1 192.168.49.2 ha-362969 localhost minikube]
	I1007 13:35:59.722212 1760312 provision.go:177] copyRemoteCerts
	I1007 13:35:59.722297 1760312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:35:59.722350 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969
	I1007 13:35:59.740964 1760312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38326 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969/id_rsa Username:docker}
	I1007 13:35:59.836224 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 13:35:59.836284 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 13:35:59.860522 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 13:35:59.860583 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 13:35:59.884787 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 13:35:59.884848 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 13:35:59.908068 1760312 provision.go:87] duration metric: took 460.22923ms to configureAuth
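For context, provision.go:117 above issues a server certificate whose SANs are 127.0.0.1, 192.168.49.2, ha-362969, localhost and minikube, signed by the ca.pem/ca-key.pem kept under .minikube/certs. A rough, self-contained Go sketch of issuing such a certificate; the CA here is generated in memory purely for illustration and error handling is elided:

    // Create a throwaway CA, then sign a server cert carrying the SANs from the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // throwaway CA (minikube instead loads ca.pem / ca-key.pem from disk)
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // server cert with the SANs from the provision step above
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-362969"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-362969", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
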
	I1007 13:35:59.908095 1760312 ubuntu.go:193] setting minikube options for container-runtime
	I1007 13:35:59.908320 1760312 config.go:182] Loaded profile config "ha-362969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:35:59.908415 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969
	I1007 13:35:59.925337 1760312 main.go:141] libmachine: Using SSH client type: native
	I1007 13:35:59.925622 1760312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38326 <nil> <nil>}
	I1007 13:35:59.925643 1760312 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:36:00.580958 1760312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:36:00.581045 1760312 machine.go:96] duration metric: took 4.615905575s to provisionDockerMachine
	I1007 13:36:00.581072 1760312 start.go:293] postStartSetup for "ha-362969" (driver="docker")
	I1007 13:36:00.581095 1760312 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:36:00.581218 1760312 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:36:00.581280 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969
	I1007 13:36:00.602075 1760312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38326 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969/id_rsa Username:docker}
	I1007 13:36:00.700735 1760312 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:36:00.703705 1760312 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 13:36:00.703746 1760312 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 13:36:00.703758 1760312 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 13:36:00.703769 1760312 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 13:36:00.703786 1760312 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-1688750/.minikube/addons for local assets ...
	I1007 13:36:00.703840 1760312 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-1688750/.minikube/files for local assets ...
	I1007 13:36:00.703922 1760312 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem -> 16941262.pem in /etc/ssl/certs
	I1007 13:36:00.703933 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem -> /etc/ssl/certs/16941262.pem
	I1007 13:36:00.704045 1760312 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:36:00.712504 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem --> /etc/ssl/certs/16941262.pem (1708 bytes)
	I1007 13:36:00.737320 1760312 start.go:296] duration metric: took 156.223008ms for postStartSetup
	I1007 13:36:00.737402 1760312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:36:00.737460 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969
	I1007 13:36:00.754036 1760312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38326 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969/id_rsa Username:docker}
	I1007 13:36:00.844384 1760312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 13:36:00.849044 1760312 fix.go:56] duration metric: took 5.237104322s for fixHost
	I1007 13:36:00.849071 1760312 start.go:83] releasing machines lock for "ha-362969", held for 5.237167114s
	I1007 13:36:00.849138 1760312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-362969
	I1007 13:36:00.865152 1760312 ssh_runner.go:195] Run: cat /version.json
	I1007 13:36:00.865197 1760312 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:36:00.865206 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969
	I1007 13:36:00.865256 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969
	I1007 13:36:00.886080 1760312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38326 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969/id_rsa Username:docker}
	I1007 13:36:00.900261 1760312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38326 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969/id_rsa Username:docker}
	I1007 13:36:00.979014 1760312 ssh_runner.go:195] Run: systemctl --version
	I1007 13:36:01.121000 1760312 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:36:01.267014 1760312 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 13:36:01.271231 1760312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:36:01.280227 1760312 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1007 13:36:01.280310 1760312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:36:01.289515 1760312 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 13:36:01.289543 1760312 start.go:495] detecting cgroup driver to use...
	I1007 13:36:01.289577 1760312 detect.go:187] detected "cgroupfs" cgroup driver on host os
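The cgroup driver reported here ("cgroupfs") has to match what CRI-O and the kubelet are configured with further down. One common way to probe the related cgroup layout on a host, shown only as an illustration and not as minikube's detect.go logic:

    // Report whether the host uses the cgroup v2 unified hierarchy,
    // one of the inputs a driver-detection step can look at.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("cgroup v2 (unified hierarchy)")
        } else {
            fmt.Println("cgroup v1 (legacy hierarchy)")
        }
    }
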
	I1007 13:36:01.289637 1760312 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:36:01.302168 1760312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:36:01.314143 1760312 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:36:01.314212 1760312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:36:01.327739 1760312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:36:01.340094 1760312 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:36:01.428818 1760312 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:36:01.519826 1760312 docker.go:233] disabling docker service ...
	I1007 13:36:01.519944 1760312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:36:01.534702 1760312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:36:01.547496 1760312 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:36:01.637014 1760312 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:36:01.726751 1760312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:36:01.740345 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:36:01.760169 1760312 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:36:01.760240 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:36:01.774345 1760312 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:36:01.774420 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:36:01.786007 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:36:01.796829 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:36:01.807701 1760312 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:36:01.817923 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:36:01.829054 1760312 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:36:01.840919 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:36:01.853070 1760312 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:36:01.862659 1760312 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:36:01.872400 1760312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:36:01.957762 1760312 ssh_runner.go:195] Run: sudo systemctl restart crio
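The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before CRI-O is restarted. A hedged Go sketch of the same kind of pattern-based substitution, applied to an illustrative in-memory copy of the file rather than the real one on the node:

    // Apply the pause_image / cgroup_manager / conmon_cgroup rewrites to a sample config.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
        // point CRI-O at the pause image the log configures
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        // switch to cgroupfs; with cgroupfs, conmon must run in the "pod" cgroup
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*$`).
            ReplaceAllString(conf, `conmon_cgroup = "pod"`)
        fmt.Print(conf)
    }
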
	I1007 13:36:02.091035 1760312 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:36:02.091145 1760312 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:36:02.095288 1760312 start.go:563] Will wait 60s for crictl version
	I1007 13:36:02.095407 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:36:02.099051 1760312 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:36:02.149296 1760312 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1007 13:36:02.149412 1760312 ssh_runner.go:195] Run: crio --version
	I1007 13:36:02.190870 1760312 ssh_runner.go:195] Run: crio --version
	I1007 13:36:02.232315 1760312 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1007 13:36:02.234764 1760312 cli_runner.go:164] Run: docker network inspect ha-362969 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 13:36:02.249258 1760312 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1007 13:36:02.253094 1760312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:36:02.264206 1760312 kubeadm.go:883] updating cluster {Name:ha-362969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-362969 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:36:02.264378 1760312 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:36:02.264449 1760312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:36:02.310020 1760312 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:36:02.310045 1760312 crio.go:433] Images already preloaded, skipping extraction
	I1007 13:36:02.310129 1760312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:36:02.351480 1760312 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:36:02.351503 1760312 cache_images.go:84] Images are preloaded, skipping loading
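The two "crictl images --output json" runs above confirm that every image required for v1.31.1 is already in CRI-O's store, so the preload tarball is not re-extracted. A sketch of how such a check could be done; the JSON field names (images, repoTags) and the image names listed are assumptions for illustration, not taken from crio.go:

    // Decode crictl's JSON image list and check a required set of tags against it.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"` // assumed field name
        } `json:"images"` // assumed field name
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        // example required tags; the real list is derived from the Kubernetes version
        for _, want := range []string{"registry.k8s.io/pause:3.10", "registry.k8s.io/kube-apiserver:v1.31.1"} {
            fmt.Println(want, "preloaded:", have[want])
        }
    }
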
	I1007 13:36:02.351522 1760312 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1007 13:36:02.351704 1760312 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-362969 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-362969 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 13:36:02.351823 1760312 ssh_runner.go:195] Run: crio config
	I1007 13:36:02.411659 1760312 cni.go:84] Creating CNI manager for ""
	I1007 13:36:02.411687 1760312 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 13:36:02.411698 1760312 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:36:02.411720 1760312 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-362969 NodeName:ha-362969 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:36:02.411867 1760312 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-362969"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 13:36:02.411887 1760312 kube-vip.go:115] generating kube-vip config ...
	I1007 13:36:02.411942 1760312 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1007 13:36:02.427198 1760312 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
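Control-plane load-balancing (lb_enable in the config below) appears to be switched on because the "lsmod | grep ip_vs" probe above found the IPVS kernel modules. A tiny illustrative check of the same condition, not minikube's kube-vip.go code:

    // Enable kube-vip's load balancer only if the ip_vs modules show up in lsmod.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "sh", "-c", "lsmod | grep ip_vs").Output()
        ipvsAvailable := err == nil && strings.Contains(string(out), "ip_vs")
        fmt.Println("enable kube-vip control-plane load-balancing:", ipvsAvailable)
    }
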
	I1007 13:36:02.427304 1760312 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 13:36:02.427375 1760312 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:36:02.437486 1760312 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:36:02.437610 1760312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 13:36:02.446959 1760312 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1007 13:36:02.465356 1760312 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:36:02.483427 1760312 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I1007 13:36:02.501979 1760312 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 13:36:02.520537 1760312 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1007 13:36:02.523977 1760312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:36:02.535358 1760312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:36:02.625390 1760312 ssh_runner.go:195] Run: sudo systemctl start kubelet
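The preceding "scp memory --> ..." lines stream generated files (the kubelet drop-in, kubeadm.yaml.new, kube-vip.yaml) straight from memory to the node over SSH, then reload systemd and start the kubelet. A sketch of that push-and-restart pattern using golang.org/x/crypto/ssh; the manifest content is a stand-in and error handling is trimmed:

    // Push an in-memory manifest to the node over SSH, then start the kubelet.
    package main

    import (
        "bytes"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, _ := os.ReadFile("/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969/id_rsa")
        signer, _ := ssh.ParsePrivateKey(key)
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a local test container
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:38326", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        manifest := []byte("apiVersion: v1\nkind: Pod\n") // stand-in for the generated kube-vip.yaml
        sess, _ := client.NewSession()
        sess.Stdin = bytes.NewReader(manifest)
        // equivalent of "scp memory --> /etc/kubernetes/manifests/kube-vip.yaml"
        if err := sess.Run("sudo tee /etc/kubernetes/manifests/kube-vip.yaml >/dev/null"); err != nil {
            panic(err)
        }

        sess2, _ := client.NewSession()
        defer sess2.Close()
        if err := sess2.Run("sudo systemctl daemon-reload && sudo systemctl start kubelet"); err != nil {
            panic(err)
        }
    }
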
	I1007 13:36:02.639708 1760312 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969 for IP: 192.168.49.2
	I1007 13:36:02.639784 1760312 certs.go:194] generating shared ca certs ...
	I1007 13:36:02.639814 1760312 certs.go:226] acquiring lock for ca certs: {Name:mk3a082a64706c071bb4db632f3ec05c7c14e01c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:36:02.639997 1760312 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.key
	I1007 13:36:02.640079 1760312 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.key
	I1007 13:36:02.640114 1760312 certs.go:256] generating profile certs ...
	I1007 13:36:02.640241 1760312 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/client.key
	I1007 13:36:02.640291 1760312 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.key.b1f6c0e9
	I1007 13:36:02.640327 1760312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.crt.b1f6c0e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1007 13:36:02.863017 1760312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.crt.b1f6c0e9 ...
	I1007 13:36:02.863113 1760312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.crt.b1f6c0e9: {Name:mk00414ae534793d32aa198534401c87f890f397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:36:02.863330 1760312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.key.b1f6c0e9 ...
	I1007 13:36:02.863371 1760312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.key.b1f6c0e9: {Name:mkf56ed8675df016910ff1a85cf0d02dc454d780 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:36:02.863512 1760312 certs.go:381] copying /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.crt.b1f6c0e9 -> /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.crt
	I1007 13:36:02.863733 1760312 certs.go:385] copying /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.key.b1f6c0e9 -> /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.key
	I1007 13:36:02.863921 1760312 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/proxy-client.key
	I1007 13:36:02.863959 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 13:36:02.863995 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 13:36:02.864039 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 13:36:02.864075 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 13:36:02.864105 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 13:36:02.864150 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 13:36:02.864189 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 13:36:02.864261 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 13:36:02.864334 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/1694126.pem (1338 bytes)
	W1007 13:36:02.864392 1760312 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/1694126_empty.pem, impossibly tiny 0 bytes
	I1007 13:36:02.864421 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 13:36:02.864513 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem (1078 bytes)
	I1007 13:36:02.864584 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:36:02.864642 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem (1679 bytes)
	I1007 13:36:02.864707 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem (1708 bytes)
	I1007 13:36:02.864770 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem -> /usr/share/ca-certificates/16941262.pem
	I1007 13:36:02.864803 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:36:02.864831 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/1694126.pem -> /usr/share/ca-certificates/1694126.pem
	I1007 13:36:02.865539 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:36:02.890880 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:36:02.918347 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:36:02.943922 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 13:36:02.968859 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1007 13:36:02.992946 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 13:36:03.018784 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:36:03.044625 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 13:36:03.070380 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem --> /usr/share/ca-certificates/16941262.pem (1708 bytes)
	I1007 13:36:03.095627 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:36:03.120289 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/1694126.pem --> /usr/share/ca-certificates/1694126.pem (1338 bytes)
	I1007 13:36:03.144581 1760312 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:36:03.162769 1760312 ssh_runner.go:195] Run: openssl version
	I1007 13:36:03.168416 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:36:03.177924 1760312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:36:03.181404 1760312 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 13:04 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:36:03.181497 1760312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:36:03.188255 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:36:03.197559 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1694126.pem && ln -fs /usr/share/ca-certificates/1694126.pem /etc/ssl/certs/1694126.pem"
	I1007 13:36:03.206924 1760312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1694126.pem
	I1007 13:36:03.210424 1760312 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 13:22 /usr/share/ca-certificates/1694126.pem
	I1007 13:36:03.210491 1760312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1694126.pem
	I1007 13:36:03.217450 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1694126.pem /etc/ssl/certs/51391683.0"
	I1007 13:36:03.226710 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16941262.pem && ln -fs /usr/share/ca-certificates/16941262.pem /etc/ssl/certs/16941262.pem"
	I1007 13:36:03.236299 1760312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16941262.pem
	I1007 13:36:03.239985 1760312 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 13:22 /usr/share/ca-certificates/16941262.pem
	I1007 13:36:03.240053 1760312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16941262.pem
	I1007 13:36:03.247171 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16941262.pem /etc/ssl/certs/3ec20f2e.0"
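Each CA ends up both as a PEM under /usr/share/ca-certificates and as a hash-named symlink in /etc/ssl/certs (for example b5213941.0 above), which is the layout OpenSSL's lookup-by-subject-hash expects. A small Go sketch that reproduces this c_rehash-style link by shelling out to openssl; the paths are placeholders and openssl is assumed to be on PATH:

    // Compute the subject hash of a CA cert and create the /etc/ssl/certs/<hash>.0 link.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // replace any stale link
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", cert)
    }
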
	I1007 13:36:03.256576 1760312 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:36:03.260163 1760312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 13:36:03.267067 1760312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 13:36:03.274201 1760312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 13:36:03.281158 1760312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 13:36:03.288182 1760312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 13:36:03.295116 1760312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
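The "-checkend 86400" runs above simply verify that none of the existing control-plane certificates expires within the next 24 hours, which is what lets the restart reuse them instead of regenerating. The equivalent check written in Go, reading one of the certificate paths from the log:

    // Go equivalent of `openssl x509 -checkend 86400` for a PEM certificate.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate expires within 86400s:", cert.NotAfter)
        } else {
            fmt.Println("certificate is valid beyond the check window")
        }
    }
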
	I1007 13:36:03.302003 1760312 kubeadm.go:392] StartCluster: {Name:ha-362969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-362969 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:36:03.302140 1760312 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:36:03.302202 1760312 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:36:03.342976 1760312 cri.go:89] found id: ""
	I1007 13:36:03.343058 1760312 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:36:03.352347 1760312 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 13:36:03.352410 1760312 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 13:36:03.352497 1760312 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 13:36:03.360915 1760312 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 13:36:03.361388 1760312 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-362969" does not appear in /home/jenkins/minikube-integration/18424-1688750/kubeconfig
	I1007 13:36:03.361500 1760312 kubeconfig.go:62] /home/jenkins/minikube-integration/18424-1688750/kubeconfig needs updating (will repair): [kubeconfig missing "ha-362969" cluster setting kubeconfig missing "ha-362969" context setting]
	I1007 13:36:03.361762 1760312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/kubeconfig: {Name:mkae782d6e0841d1e777fb7cb23057f0dd940052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:36:03.362198 1760312 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18424-1688750/kubeconfig
	I1007 13:36:03.362447 1760312 kapi.go:59] client config for ha-362969: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/client.crt", KeyFile:"/home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/client.key", CAFile:"/home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e94a20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 13:36:03.362939 1760312 cert_rotation.go:140] Starting client certificate rotation controller
	I1007 13:36:03.363135 1760312 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 13:36:03.371874 1760312 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I1007 13:36:03.371939 1760312 kubeadm.go:597] duration metric: took 19.514396ms to restartPrimaryControlPlane
	I1007 13:36:03.371956 1760312 kubeadm.go:394] duration metric: took 69.960186ms to StartCluster
	I1007 13:36:03.371973 1760312 settings.go:142] acquiring lock: {Name:mkc4eef6ec2cbdb287b7d49da88f957f9ede0465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:36:03.372047 1760312 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-1688750/kubeconfig
	I1007 13:36:03.372623 1760312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/kubeconfig: {Name:mkae782d6e0841d1e777fb7cb23057f0dd940052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
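kubeconfig.go:62 noted that the kubeconfig was missing both the "ha-362969" cluster and context entries, and the two WriteFile lines are that repair. A sketch of this kind of repair using client-go's clientcmd package; the server URL and certificate paths are the ones shown in the client config above, but the code itself is illustrative rather than minikube's:

    // Add the missing cluster, user and context entries to a kubeconfig and write it back.
    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        path := "/home/jenkins/minikube-integration/18424-1688750/kubeconfig"
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            panic(err)
        }

        cluster := api.NewCluster()
        cluster.Server = "https://192.168.49.2:8443"
        cluster.CertificateAuthority = "/home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt"
        cfg.Clusters["ha-362969"] = cluster

        user := api.NewAuthInfo()
        user.ClientCertificate = "/home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/client.crt"
        user.ClientKey = "/home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/client.key"
        cfg.AuthInfos["ha-362969"] = user

        ctx := api.NewContext()
        ctx.Cluster = "ha-362969"
        ctx.AuthInfo = "ha-362969"
        cfg.Contexts["ha-362969"] = ctx
        cfg.CurrentContext = "ha-362969"

        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            panic(err)
        }
    }
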
	I1007 13:36:03.372829 1760312 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:36:03.372857 1760312 start.go:241] waiting for startup goroutines ...
	I1007 13:36:03.372872 1760312 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:36:03.373358 1760312 config.go:182] Loaded profile config "ha-362969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:36:03.376896 1760312 out.go:177] * Enabled addons: 
	I1007 13:36:03.379090 1760312 addons.go:510] duration metric: took 6.216921ms for enable addons: enabled=[]
	I1007 13:36:03.379148 1760312 start.go:246] waiting for cluster config update ...
	I1007 13:36:03.379163 1760312 start.go:255] writing updated cluster config ...
	I1007 13:36:03.382030 1760312 out.go:201] 
	I1007 13:36:03.384850 1760312 config.go:182] Loaded profile config "ha-362969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:36:03.385036 1760312 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/config.json ...
	I1007 13:36:03.388012 1760312 out.go:177] * Starting "ha-362969-m02" control-plane node in "ha-362969" cluster
	I1007 13:36:03.390372 1760312 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 13:36:03.393020 1760312 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 13:36:03.395487 1760312 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:36:03.395540 1760312 cache.go:56] Caching tarball of preloaded images
	I1007 13:36:03.395585 1760312 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 13:36:03.395660 1760312 preload.go:172] Found /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1007 13:36:03.395671 1760312 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:36:03.395791 1760312 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/config.json ...
	I1007 13:36:03.414026 1760312 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 13:36:03.414047 1760312 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1007 13:36:03.414062 1760312 cache.go:194] Successfully downloaded all kic artifacts
	I1007 13:36:03.414087 1760312 start.go:360] acquireMachinesLock for ha-362969-m02: {Name:mkb308390cfa2876f2d723eea2a419bfb42e6264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:36:03.414156 1760312 start.go:364] duration metric: took 46.851µs to acquireMachinesLock for "ha-362969-m02"
	I1007 13:36:03.414178 1760312 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:36:03.414188 1760312 fix.go:54] fixHost starting: m02
	I1007 13:36:03.414452 1760312 cli_runner.go:164] Run: docker container inspect ha-362969-m02 --format={{.State.Status}}
	I1007 13:36:03.430720 1760312 fix.go:112] recreateIfNeeded on ha-362969-m02: state=Stopped err=<nil>
	W1007 13:36:03.430750 1760312 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:36:03.433798 1760312 out.go:177] * Restarting existing docker container for "ha-362969-m02" ...
	I1007 13:36:03.436559 1760312 cli_runner.go:164] Run: docker start ha-362969-m02
	I1007 13:36:03.711727 1760312 cli_runner.go:164] Run: docker container inspect ha-362969-m02 --format={{.State.Status}}
	I1007 13:36:03.731764 1760312 kic.go:430] container "ha-362969-m02" state is running.
	I1007 13:36:03.732119 1760312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-362969-m02
	I1007 13:36:03.756968 1760312 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/config.json ...
	I1007 13:36:03.757219 1760312 machine.go:93] provisionDockerMachine start ...
	I1007 13:36:03.757286 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m02
	I1007 13:36:03.781824 1760312 main.go:141] libmachine: Using SSH client type: native
	I1007 13:36:03.782068 1760312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38331 <nil> <nil>}
	I1007 13:36:03.782084 1760312 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:36:03.783700 1760312 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1007 13:36:06.967817 1760312 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-362969-m02
	
	I1007 13:36:06.967903 1760312 ubuntu.go:169] provisioning hostname "ha-362969-m02"
	I1007 13:36:06.967996 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m02
	I1007 13:36:06.994487 1760312 main.go:141] libmachine: Using SSH client type: native
	I1007 13:36:06.994740 1760312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38331 <nil> <nil>}
	I1007 13:36:06.994759 1760312 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-362969-m02 && echo "ha-362969-m02" | sudo tee /etc/hostname
	I1007 13:36:07.218016 1760312 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-362969-m02
	
	I1007 13:36:07.218135 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m02
	I1007 13:36:07.249220 1760312 main.go:141] libmachine: Using SSH client type: native
	I1007 13:36:07.249472 1760312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38331 <nil> <nil>}
	I1007 13:36:07.249490 1760312 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-362969-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-362969-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-362969-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:36:07.444925 1760312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:36:07.444988 1760312 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18424-1688750/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-1688750/.minikube}
	I1007 13:36:07.445026 1760312 ubuntu.go:177] setting up certificates
	I1007 13:36:07.445071 1760312 provision.go:84] configureAuth start
	I1007 13:36:07.445149 1760312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-362969-m02
	I1007 13:36:07.470719 1760312 provision.go:143] copyHostCerts
	I1007 13:36:07.470757 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.pem
	I1007 13:36:07.470786 1760312 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.pem, removing ...
	I1007 13:36:07.470793 1760312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.pem
	I1007 13:36:07.470866 1760312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.pem (1078 bytes)
	I1007 13:36:07.470944 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-1688750/.minikube/cert.pem
	I1007 13:36:07.470960 1760312 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-1688750/.minikube/cert.pem, removing ...
	I1007 13:36:07.470964 1760312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-1688750/.minikube/cert.pem
	I1007 13:36:07.470989 1760312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-1688750/.minikube/cert.pem (1123 bytes)
	I1007 13:36:07.471029 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-1688750/.minikube/key.pem
	I1007 13:36:07.471047 1760312 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-1688750/.minikube/key.pem, removing ...
	I1007 13:36:07.471051 1760312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-1688750/.minikube/key.pem
	I1007 13:36:07.471074 1760312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-1688750/.minikube/key.pem (1679 bytes)
	I1007 13:36:07.471118 1760312 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca-key.pem org=jenkins.ha-362969-m02 san=[127.0.0.1 192.168.49.3 ha-362969-m02 localhost minikube]
	I1007 13:36:07.897627 1760312 provision.go:177] copyRemoteCerts
	I1007 13:36:07.897743 1760312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:36:07.897809 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m02
	I1007 13:36:07.916858 1760312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38331 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969-m02/id_rsa Username:docker}
	I1007 13:36:08.052594 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 13:36:08.052658 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 13:36:08.112502 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 13:36:08.112575 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 13:36:08.141336 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 13:36:08.141427 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 13:36:08.169608 1760312 provision.go:87] duration metric: took 724.509735ms to configureAuth
	I1007 13:36:08.169639 1760312 ubuntu.go:193] setting minikube options for container-runtime
	I1007 13:36:08.169927 1760312 config.go:182] Loaded profile config "ha-362969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:36:08.170068 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m02
	I1007 13:36:08.193896 1760312 main.go:141] libmachine: Using SSH client type: native
	I1007 13:36:08.194142 1760312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38331 <nil> <nil>}
	I1007 13:36:08.194163 1760312 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:36:08.585503 1760312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:36:08.585527 1760312 machine.go:96] duration metric: took 4.828298024s to provisionDockerMachine
	I1007 13:36:08.585598 1760312 start.go:293] postStartSetup for "ha-362969-m02" (driver="docker")
	I1007 13:36:08.585617 1760312 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:36:08.585698 1760312 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:36:08.585762 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m02
	I1007 13:36:08.603025 1760312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38331 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969-m02/id_rsa Username:docker}
	I1007 13:36:08.740305 1760312 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:36:08.763092 1760312 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 13:36:08.763138 1760312 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 13:36:08.763149 1760312 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 13:36:08.763156 1760312 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 13:36:08.763167 1760312 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-1688750/.minikube/addons for local assets ...
	I1007 13:36:08.763225 1760312 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-1688750/.minikube/files for local assets ...
	I1007 13:36:08.763302 1760312 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem -> 16941262.pem in /etc/ssl/certs
	I1007 13:36:08.763315 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem -> /etc/ssl/certs/16941262.pem
	I1007 13:36:08.763418 1760312 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:36:08.798300 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem --> /etc/ssl/certs/16941262.pem (1708 bytes)
	I1007 13:36:08.849744 1760312 start.go:296] duration metric: took 264.124127ms for postStartSetup
	I1007 13:36:08.849842 1760312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:36:08.849881 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m02
	I1007 13:36:08.883300 1760312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38331 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969-m02/id_rsa Username:docker}
	I1007 13:36:09.032059 1760312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 13:36:09.048669 1760312 fix.go:56] duration metric: took 5.634473886s for fixHost
	I1007 13:36:09.048706 1760312 start.go:83] releasing machines lock for "ha-362969-m02", held for 5.634532954s
	I1007 13:36:09.048810 1760312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-362969-m02
	I1007 13:36:09.075131 1760312 out.go:177] * Found network options:
	I1007 13:36:09.077865 1760312 out.go:177]   - NO_PROXY=192.168.49.2
	W1007 13:36:09.080384 1760312 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 13:36:09.080436 1760312 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 13:36:09.080511 1760312 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:36:09.080561 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m02
	I1007 13:36:09.080852 1760312 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:36:09.080924 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m02
	I1007 13:36:09.119836 1760312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38331 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969-m02/id_rsa Username:docker}
	I1007 13:36:09.120584 1760312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38331 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969-m02/id_rsa Username:docker}
	I1007 13:36:09.449978 1760312 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 13:36:09.527033 1760312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:36:09.553968 1760312 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1007 13:36:09.554072 1760312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:36:09.592282 1760312 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 13:36:09.592309 1760312 start.go:495] detecting cgroup driver to use...
	I1007 13:36:09.592341 1760312 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 13:36:09.592423 1760312 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:36:09.635453 1760312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:36:09.703125 1760312 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:36:09.703231 1760312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:36:09.755594 1760312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:36:09.816341 1760312 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:36:10.134750 1760312 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:36:10.442031 1760312 docker.go:233] disabling docker service ...
	I1007 13:36:10.442141 1760312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:36:10.490412 1760312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:36:10.537673 1760312 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:36:10.836968 1760312 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:36:11.120987 1760312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:36:11.166943 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:36:11.223784 1760312 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:36:11.223934 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:36:11.239585 1760312 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:36:11.239736 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:36:11.255707 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:36:11.273949 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:36:11.320896 1760312 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:36:11.360149 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:36:11.406600 1760312 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:36:11.471036 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:36:11.551981 1760312 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:36:11.584186 1760312 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:36:11.619626 1760312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:36:11.916744 1760312 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:36:12.432757 1760312 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:36:12.432838 1760312 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:36:12.440156 1760312 start.go:563] Will wait 60s for crictl version
	I1007 13:36:12.440255 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:36:12.444117 1760312 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:36:12.522006 1760312 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1007 13:36:12.522111 1760312 ssh_runner.go:195] Run: crio --version
	I1007 13:36:12.591934 1760312 ssh_runner.go:195] Run: crio --version
	I1007 13:36:12.683954 1760312 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1007 13:36:12.686490 1760312 out.go:177]   - env NO_PROXY=192.168.49.2
	I1007 13:36:12.689087 1760312 cli_runner.go:164] Run: docker network inspect ha-362969 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 13:36:12.707677 1760312 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1007 13:36:12.713131 1760312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:36:12.741005 1760312 mustload.go:65] Loading cluster: ha-362969
	I1007 13:36:12.741249 1760312 config.go:182] Loaded profile config "ha-362969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:36:12.741524 1760312 cli_runner.go:164] Run: docker container inspect ha-362969 --format={{.State.Status}}
	I1007 13:36:12.766336 1760312 host.go:66] Checking if "ha-362969" exists ...
	I1007 13:36:12.766608 1760312 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969 for IP: 192.168.49.3
	I1007 13:36:12.766616 1760312 certs.go:194] generating shared ca certs ...
	I1007 13:36:12.766630 1760312 certs.go:226] acquiring lock for ca certs: {Name:mk3a082a64706c071bb4db632f3ec05c7c14e01c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:36:12.766753 1760312 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.key
	I1007 13:36:12.766797 1760312 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.key
	I1007 13:36:12.766813 1760312 certs.go:256] generating profile certs ...
	I1007 13:36:12.766887 1760312 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/client.key
	I1007 13:36:12.766955 1760312 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.key.7e6355d0
	I1007 13:36:12.766999 1760312 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/proxy-client.key
	I1007 13:36:12.767014 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 13:36:12.767032 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 13:36:12.767047 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 13:36:12.767065 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 13:36:12.767077 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 13:36:12.767092 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 13:36:12.767114 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 13:36:12.767125 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 13:36:12.767174 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/1694126.pem (1338 bytes)
	W1007 13:36:12.767205 1760312 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/1694126_empty.pem, impossibly tiny 0 bytes
	I1007 13:36:12.767217 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 13:36:12.767244 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem (1078 bytes)
	I1007 13:36:12.767274 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:36:12.767302 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem (1679 bytes)
	I1007 13:36:12.767350 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem (1708 bytes)
	I1007 13:36:12.767383 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/1694126.pem -> /usr/share/ca-certificates/1694126.pem
	I1007 13:36:12.767399 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem -> /usr/share/ca-certificates/16941262.pem
	I1007 13:36:12.767410 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:36:12.767475 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969
	I1007 13:36:12.802899 1760312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38326 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969/id_rsa Username:docker}
	I1007 13:36:12.915840 1760312 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 13:36:12.926228 1760312 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 13:36:12.948159 1760312 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 13:36:12.952219 1760312 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1007 13:36:12.966831 1760312 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 13:36:12.977098 1760312 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 13:36:12.990432 1760312 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 13:36:12.994375 1760312 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 13:36:13.020254 1760312 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 13:36:13.028610 1760312 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 13:36:13.058504 1760312 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 13:36:13.071396 1760312 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1007 13:36:13.104333 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:36:13.141077 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:36:13.173033 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:36:13.208608 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 13:36:13.243053 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1007 13:36:13.280140 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 13:36:13.305404 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:36:13.335866 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 13:36:13.379998 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/1694126.pem --> /usr/share/ca-certificates/1694126.pem (1338 bytes)
	I1007 13:36:13.420932 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem --> /usr/share/ca-certificates/16941262.pem (1708 bytes)
	I1007 13:36:13.458659 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:36:13.504518 1760312 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 13:36:13.538049 1760312 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1007 13:36:13.560048 1760312 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 13:36:13.580128 1760312 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 13:36:13.603520 1760312 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 13:36:13.624415 1760312 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1007 13:36:13.653735 1760312 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 13:36:13.681550 1760312 ssh_runner.go:195] Run: openssl version
	I1007 13:36:13.688119 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1694126.pem && ln -fs /usr/share/ca-certificates/1694126.pem /etc/ssl/certs/1694126.pem"
	I1007 13:36:13.701489 1760312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1694126.pem
	I1007 13:36:13.708149 1760312 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 13:22 /usr/share/ca-certificates/1694126.pem
	I1007 13:36:13.708268 1760312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1694126.pem
	I1007 13:36:13.721456 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1694126.pem /etc/ssl/certs/51391683.0"
	I1007 13:36:13.733547 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16941262.pem && ln -fs /usr/share/ca-certificates/16941262.pem /etc/ssl/certs/16941262.pem"
	I1007 13:36:13.753773 1760312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16941262.pem
	I1007 13:36:13.759647 1760312 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 13:22 /usr/share/ca-certificates/16941262.pem
	I1007 13:36:13.759770 1760312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16941262.pem
	I1007 13:36:13.767228 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16941262.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:36:13.776606 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:36:13.786268 1760312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:36:13.791640 1760312 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 13:04 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:36:13.791757 1760312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:36:13.801242 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:36:13.810939 1760312 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:36:13.815096 1760312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 13:36:13.822324 1760312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 13:36:13.831480 1760312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 13:36:13.844511 1760312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 13:36:13.855352 1760312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 13:36:13.863522 1760312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
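The openssl "-checkend 86400" runs above verify that each certificate remains valid for at least another 24 hours. A small Go equivalent of that check (the certificate path in main is a placeholder for one of the files checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path will
// expire within d, i.e. the openssl -checkend test translated to Go.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}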
	I1007 13:36:13.876331 1760312 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.1 crio true true} ...
	I1007 13:36:13.876493 1760312 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-362969-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-362969 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 13:36:13.876527 1760312 kube-vip.go:115] generating kube-vip config ...
	I1007 13:36:13.876596 1760312 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1007 13:36:13.893299 1760312 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 13:36:13.893386 1760312 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 13:36:13.893482 1760312 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:36:13.907820 1760312 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:36:13.907933 1760312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 13:36:13.916726 1760312 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1007 13:36:13.947281 1760312 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:36:13.979304 1760312 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 13:36:14.012214 1760312 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1007 13:36:14.015990 1760312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:36:14.031486 1760312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:36:14.218202 1760312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:36:14.239957 1760312 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:36:14.240589 1760312 config.go:182] Loaded profile config "ha-362969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:36:14.243625 1760312 out.go:177] * Verifying Kubernetes components...
	I1007 13:36:14.246792 1760312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:36:14.424910 1760312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:36:14.446789 1760312 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18424-1688750/kubeconfig
	I1007 13:36:14.447065 1760312 kapi.go:59] client config for ha-362969: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/client.crt", KeyFile:"/home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/client.key", CAFile:"/home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e94a20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 13:36:14.447125 1760312 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1007 13:36:14.447353 1760312 node_ready.go:35] waiting up to 6m0s for node "ha-362969-m02" to be "Ready" ...
	I1007 13:36:14.447432 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:36:14.447438 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:14.447447 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:14.447451 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:22.963308 1760312 round_trippers.go:574] Response Status: 500 Internal Server Error in 8515 milliseconds
	I1007 13:36:22.963556 1760312 node_ready.go:53] error getting node "ha-362969-m02": etcdserver: leader changed
	I1007 13:36:22.963617 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:36:22.963630 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:22.963638 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:22.963642 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:22.966669 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:36:22.968104 1760312 node_ready.go:49] node "ha-362969-m02" has status "Ready":"True"
	I1007 13:36:22.968173 1760312 node_ready.go:38] duration metric: took 8.520806353s for node "ha-362969-m02" to be "Ready" ...
	I1007 13:36:22.968217 1760312 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:36:22.968301 1760312 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 13:36:22.968329 1760312 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 13:36:22.968420 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1007 13:36:22.968446 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:22.968478 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:22.968496 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:22.971115 1760312 round_trippers.go:574] Response Status: 429 Too Many Requests in 2 milliseconds
	I1007 13:36:23.971326 1760312 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1007 13:36:23.971392 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1007 13:36:23.971414 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:23.971422 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:23.971427 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:23.998105 1760312 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
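The 429 above is retried after the server-suggested Retry-After of 1s, and the follow-up request succeeds. A minimal Go sketch of that pattern with net/http (illustrative only; client-go's rest client handles this internally, and TLS/auth setup for the API server is omitted here):

package main

import (
	"fmt"
	"io"
	"net/http"
	"strconv"
	"time"
)

// getWithRetryAfter performs a GET and, on 429 Too Many Requests, waits the
// server-suggested Retry-After (in seconds) before retrying, up to maxRetries.
func getWithRetryAfter(url string, maxRetries int) (*http.Response, error) {
	for attempt := 0; ; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode != http.StatusTooManyRequests || attempt >= maxRetries {
			return resp, nil
		}
		wait := time.Second
		if s := resp.Header.Get("Retry-After"); s != "" {
			if secs, err := strconv.Atoi(s); err == nil {
				wait = time.Duration(secs) * time.Second
			}
		}
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()
		time.Sleep(wait)
	}
}

func main() {
	resp, err := getWithRetryAfter("https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods", 3)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}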
	I1007 13:36:24.013211 1760312 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kjxj5" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:24.013414 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kjxj5
	I1007 13:36:24.013445 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:24.013467 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:24.013483 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:24.016986 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:36:24.017721 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:36:24.017737 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:24.017746 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:24.017750 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:24.021065 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:36:24.021512 1760312 pod_ready.go:93] pod "coredns-7c65d6cfc9-kjxj5" in "kube-system" namespace has status "Ready":"True"
	I1007 13:36:24.021525 1760312 pod_ready.go:82] duration metric: took 8.218061ms for pod "coredns-7c65d6cfc9-kjxj5" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:24.021537 1760312 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v7rpb" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:24.021603 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v7rpb
	I1007 13:36:24.021607 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:24.021614 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:24.021619 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:24.024580 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:36:24.025804 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:36:24.025869 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:24.025896 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:24.025915 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:24.028977 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:36:24.029647 1760312 pod_ready.go:93] pod "coredns-7c65d6cfc9-v7rpb" in "kube-system" namespace has status "Ready":"True"
	I1007 13:36:24.029698 1760312 pod_ready.go:82] duration metric: took 8.144512ms for pod "coredns-7c65d6cfc9-v7rpb" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:24.029724 1760312 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-362969" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:24.029819 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-362969
	I1007 13:36:24.029853 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:24.029875 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:24.029892 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:24.033131 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:36:24.033960 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:36:24.034014 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:24.034039 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:24.034057 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:24.037305 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:36:24.038362 1760312 pod_ready.go:93] pod "etcd-ha-362969" in "kube-system" namespace has status "Ready":"True"
	I1007 13:36:24.038429 1760312 pod_ready.go:82] duration metric: took 8.674763ms for pod "etcd-ha-362969" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:24.038455 1760312 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-362969-m02" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:24.038554 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-362969-m02
	I1007 13:36:24.038590 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:24.038611 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:24.038628 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:24.044487 1760312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 13:36:24.045235 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:36:24.045285 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:24.045308 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:24.045329 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:24.049106 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:36:24.049707 1760312 pod_ready.go:93] pod "etcd-ha-362969-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 13:36:24.049745 1760312 pod_ready.go:82] duration metric: took 11.268833ms for pod "etcd-ha-362969-m02" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:24.049770 1760312 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-362969-m03" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:24.049886 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-362969-m03
	I1007 13:36:24.049914 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:24.049935 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:24.049969 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:24.053786 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:36:24.172272 1760312 request.go:632] Waited for 117.275255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969-m03
	I1007 13:36:24.172442 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m03
	I1007 13:36:24.172480 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:24.172503 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:24.172521 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:24.174994 1760312 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1007 13:36:24.175188 1760312 pod_ready.go:98] node "ha-362969-m03" hosting pod "etcd-ha-362969-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-362969-m03": nodes "ha-362969-m03" not found
	I1007 13:36:24.175204 1760312 pod_ready.go:82] duration metric: took 125.401756ms for pod "etcd-ha-362969-m03" in "kube-system" namespace to be "Ready" ...
	E1007 13:36:24.175215 1760312 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-362969-m03" hosting pod "etcd-ha-362969-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-362969-m03": nodes "ha-362969-m03" not found
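The 404 above is why the wait loop skips pods whose node has already been removed from the cluster (m03 no longer exists after the restart). A minimal client-go sketch of that existence check (illustrative; the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeExists returns false without an error when the node has been removed,
// mirroring the "nodes ... not found" branch in the log above.
func nodeExists(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	_, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := nodeExists(context.Background(), cs, "ha-362969-m03")
	fmt.Println("node exists:", ok, "err:", err)
}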
	I1007 13:36:24.175235 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-362969" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:24.371694 1760312 request.go:632] Waited for 196.382766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:36:24.371764 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:36:24.371774 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:24.371783 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:24.371799 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:24.374656 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:36:24.571997 1760312 request.go:632] Waited for 192.156524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:36:24.572056 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:36:24.572065 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:24.572076 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:24.572086 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:24.577249 1760312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 13:36:24.577730 1760312 pod_ready.go:93] pod "kube-apiserver-ha-362969" in "kube-system" namespace has status "Ready":"True"
	I1007 13:36:24.577751 1760312 pod_ready.go:82] duration metric: took 402.504162ms for pod "kube-apiserver-ha-362969" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:24.577762 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-362969-m02" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:24.772236 1760312 request.go:632] Waited for 194.406643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969-m02
	I1007 13:36:24.772322 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969-m02
	I1007 13:36:24.772375 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:24.772389 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:24.772394 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:24.782406 1760312 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 13:36:24.971409 1760312 request.go:632] Waited for 188.257175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:36:24.971481 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:36:24.971488 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:24.971509 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:24.971518 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:24.974180 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:36:24.974757 1760312 pod_ready.go:93] pod "kube-apiserver-ha-362969-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 13:36:24.974779 1760312 pod_ready.go:82] duration metric: took 397.008979ms for pod "kube-apiserver-ha-362969-m02" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:24.974791 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-362969-m03" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:25.172222 1760312 request.go:632] Waited for 197.336416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969-m03
	I1007 13:36:25.172339 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969-m03
	I1007 13:36:25.172404 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:25.172439 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:25.172484 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:25.176714 1760312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 13:36:25.371969 1760312 request.go:632] Waited for 194.328556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969-m03
	I1007 13:36:25.372079 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m03
	I1007 13:36:25.372097 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:25.372107 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:25.372111 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:25.374519 1760312 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1007 13:36:25.374703 1760312 pod_ready.go:98] node "ha-362969-m03" hosting pod "kube-apiserver-ha-362969-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-362969-m03": nodes "ha-362969-m03" not found
	I1007 13:36:25.374723 1760312 pod_ready.go:82] duration metric: took 399.897072ms for pod "kube-apiserver-ha-362969-m03" in "kube-system" namespace to be "Ready" ...
	E1007 13:36:25.374748 1760312 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-362969-m03" hosting pod "kube-apiserver-ha-362969-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-362969-m03": nodes "ha-362969-m03" not found
	I1007 13:36:25.374764 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-362969" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:25.572335 1760312 request.go:632] Waited for 197.494788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-362969
	I1007 13:36:25.572398 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-362969
	I1007 13:36:25.572411 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:25.572421 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:25.572433 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:25.575244 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:36:25.772131 1760312 request.go:632] Waited for 196.147752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:36:25.772255 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:36:25.772267 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:25.772288 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:25.772300 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:25.774819 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:36:25.775632 1760312 pod_ready.go:93] pod "kube-controller-manager-ha-362969" in "kube-system" namespace has status "Ready":"True"
	I1007 13:36:25.775653 1760312 pod_ready.go:82] duration metric: took 400.878291ms for pod "kube-controller-manager-ha-362969" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:25.775665 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-362969-m02" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:25.972009 1760312 request.go:632] Waited for 196.277767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-362969-m02
	I1007 13:36:25.972098 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-362969-m02
	I1007 13:36:25.972149 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:25.972161 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:25.972165 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:25.974926 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:36:26.172065 1760312 request.go:632] Waited for 196.3715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:36:26.172176 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:36:26.172208 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:26.172234 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:26.172254 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:26.175143 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:36:26.176115 1760312 pod_ready.go:93] pod "kube-controller-manager-ha-362969-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 13:36:26.176175 1760312 pod_ready.go:82] duration metric: took 400.500856ms for pod "kube-controller-manager-ha-362969-m02" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:26.176200 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-362969-m03" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:26.371578 1760312 request.go:632] Waited for 195.279917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-362969-m03
	I1007 13:36:26.371656 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-362969-m03
	I1007 13:36:26.371665 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:26.371682 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:26.371690 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:26.375220 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:36:26.571810 1760312 request.go:632] Waited for 195.174681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969-m03
	I1007 13:36:26.571879 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m03
	I1007 13:36:26.571889 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:26.571899 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:26.571907 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:26.574817 1760312 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1007 13:36:26.575124 1760312 pod_ready.go:98] node "ha-362969-m03" hosting pod "kube-controller-manager-ha-362969-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-362969-m03": nodes "ha-362969-m03" not found
	I1007 13:36:26.575151 1760312 pod_ready.go:82] duration metric: took 398.929711ms for pod "kube-controller-manager-ha-362969-m03" in "kube-system" namespace to be "Ready" ...
	E1007 13:36:26.575167 1760312 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-362969-m03" hosting pod "kube-controller-manager-ha-362969-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-362969-m03": nodes "ha-362969-m03" not found
	I1007 13:36:26.575199 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4gzvf" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:26.771345 1760312 request.go:632] Waited for 196.061863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gzvf
	I1007 13:36:26.771474 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4gzvf
	I1007 13:36:26.771524 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:26.771562 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:26.771579 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:26.777681 1760312 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 13:36:26.972302 1760312 request.go:632] Waited for 193.266305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969-m03
	I1007 13:36:26.972416 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m03
	I1007 13:36:26.972458 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:26.972490 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:26.972516 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:26.975673 1760312 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I1007 13:36:26.976128 1760312 pod_ready.go:98] node "ha-362969-m03" hosting pod "kube-proxy-4gzvf" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-362969-m03": nodes "ha-362969-m03" not found
	I1007 13:36:26.976174 1760312 pod_ready.go:82] duration metric: took 400.959068ms for pod "kube-proxy-4gzvf" in "kube-system" namespace to be "Ready" ...
	E1007 13:36:26.976214 1760312 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-362969-m03" hosting pod "kube-proxy-4gzvf" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-362969-m03": nodes "ha-362969-m03" not found
	I1007 13:36:26.976246 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jwdpx" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:27.171677 1760312 request.go:632] Waited for 195.314969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jwdpx
	I1007 13:36:27.171807 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jwdpx
	I1007 13:36:27.171814 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:27.171823 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:27.171830 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:27.177238 1760312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 13:36:27.371893 1760312 request.go:632] Waited for 192.140623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:36:27.372001 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:36:27.372021 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:27.372056 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:27.372077 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:27.376438 1760312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 13:36:27.377502 1760312 pod_ready.go:93] pod "kube-proxy-jwdpx" in "kube-system" namespace has status "Ready":"True"
	I1007 13:36:27.377559 1760312 pod_ready.go:82] duration metric: took 401.281128ms for pod "kube-proxy-jwdpx" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:27.377597 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qxlrd" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:27.571676 1760312 request.go:632] Waited for 193.996051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qxlrd
	I1007 13:36:27.571865 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qxlrd
	I1007 13:36:27.571919 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:27.572019 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:27.572043 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:27.578342 1760312 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 13:36:27.771866 1760312 request.go:632] Waited for 192.187514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:36:27.771975 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:36:27.771997 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:27.772036 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:27.772055 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:27.775301 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:36:27.776194 1760312 pod_ready.go:93] pod "kube-proxy-qxlrd" in "kube-system" namespace has status "Ready":"True"
	I1007 13:36:27.776218 1760312 pod_ready.go:82] duration metric: took 398.597649ms for pod "kube-proxy-qxlrd" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:27.776230 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vxzkt" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:27.972047 1760312 request.go:632] Waited for 195.725535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vxzkt
	I1007 13:36:27.972142 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vxzkt
	I1007 13:36:27.972206 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:27.972216 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:27.972233 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:27.976951 1760312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 13:36:28.171968 1760312 request.go:632] Waited for 194.318292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:36:28.172046 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:36:28.172071 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:28.172086 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:28.172091 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:28.181793 1760312 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 13:36:28.182401 1760312 pod_ready.go:93] pod "kube-proxy-vxzkt" in "kube-system" namespace has status "Ready":"True"
	I1007 13:36:28.182423 1760312 pod_ready.go:82] duration metric: took 406.185039ms for pod "kube-proxy-vxzkt" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:28.182434 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-362969" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:28.371777 1760312 request.go:632] Waited for 189.24732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-362969
	I1007 13:36:28.371894 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-362969
	I1007 13:36:28.371933 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:28.371955 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:28.371973 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:28.374810 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:36:28.572175 1760312 request.go:632] Waited for 196.31359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:36:28.572285 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:36:28.572324 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:28.572350 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:28.572368 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:28.575736 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:36:28.576444 1760312 pod_ready.go:93] pod "kube-scheduler-ha-362969" in "kube-system" namespace has status "Ready":"True"
	I1007 13:36:28.576467 1760312 pod_ready.go:82] duration metric: took 394.024955ms for pod "kube-scheduler-ha-362969" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:28.576479 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-362969-m02" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:28.771438 1760312 request.go:632] Waited for 194.887344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-362969-m02
	I1007 13:36:28.771553 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-362969-m02
	I1007 13:36:28.771566 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:28.771576 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:28.771588 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:28.774355 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:36:28.972249 1760312 request.go:632] Waited for 197.322254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:36:28.972322 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:36:28.972334 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:28.972343 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:28.972358 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:28.975566 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:36:28.976139 1760312 pod_ready.go:93] pod "kube-scheduler-ha-362969-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 13:36:28.976160 1760312 pod_ready.go:82] duration metric: took 399.673996ms for pod "kube-scheduler-ha-362969-m02" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:28.976189 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-362969-m03" in "kube-system" namespace to be "Ready" ...
	I1007 13:36:29.172133 1760312 request.go:632] Waited for 195.86748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-362969-m03
	I1007 13:36:29.172260 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-362969-m03
	I1007 13:36:29.172299 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:29.172332 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:29.172351 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:29.175332 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:36:29.372273 1760312 request.go:632] Waited for 196.319965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969-m03
	I1007 13:36:29.372330 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m03
	I1007 13:36:29.372337 1760312 round_trippers.go:469] Request Headers:
	I1007 13:36:29.372344 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:36:29.372367 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:36:29.374982 1760312 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1007 13:36:29.375132 1760312 pod_ready.go:98] node "ha-362969-m03" hosting pod "kube-scheduler-ha-362969-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-362969-m03": nodes "ha-362969-m03" not found
	I1007 13:36:29.375150 1760312 pod_ready.go:82] duration metric: took 398.945046ms for pod "kube-scheduler-ha-362969-m03" in "kube-system" namespace to be "Ready" ...
	E1007 13:36:29.375160 1760312 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-362969-m03" hosting pod "kube-scheduler-ha-362969-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-362969-m03": nodes "ha-362969-m03" not found
	I1007 13:36:29.375184 1760312 pod_ready.go:39] duration metric: took 6.406925324s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
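Editor's note: the readiness wait above follows one pattern per pod: GET the pod, GET the node it is scheduled on, and downgrade the wait to a skip when that node no longer exists (as happened for ha-362969-m03). A simplified Go sketch of that pattern follows; it is not minikube's pod_ready.go, the function name and kubeconfig path are assumptions, and it omits the "node exists but is not Ready" case that the real check also handles.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod is Ready, or whether the check should be
// skipped because the node hosting it no longer exists.
func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (ready, skip bool, err error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, false, err
	}
	// The node the pod is scheduled on must still exist; a missing node
	// turns the wait into a skip rather than a hard failure.
	if _, err := c.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{}); err != nil {
		if apierrors.IsNotFound(err) {
			return false, true, nil
		}
		return false, false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, false, nil
		}
	}
	return false, false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, skip, err := podReady(context.TODO(), client, "kube-system", "kube-controller-manager-ha-362969")
	fmt.Println("ready:", ready, "skip:", skip, "err:", err)
}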
	I1007 13:36:29.375202 1760312 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:36:29.375267 1760312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:29.875882 1760312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:30.375860 1760312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:30.875517 1760312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:31.376149 1760312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:31.875652 1760312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:32.375951 1760312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:32.875984 1760312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:33.375606 1760312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:33.875636 1760312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:34.375426 1760312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:34.875376 1760312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:36:34.886597 1760312 api_server.go:72] duration metric: took 20.646595887s to wait for apiserver process to appear ...
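Editor's note: the block above is a simple poll, retrying `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms until the process exists. A minimal Go sketch of that loop follows; minikube actually runs the command on the node over SSH via its ssh_runner, so running it locally here is an assumption made only to keep the example self-contained.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching kube-apiserver process exists.
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err := cmd.Run(); err == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		// The log above shows roughly one attempt every 500ms.
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver process")
}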
	I1007 13:36:34.886625 1760312 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:36:34.886647 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:34.894726 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:34.894753 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:35.387333 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:35.395466 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:35.395491 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:35.887394 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:35.895565 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:35.895596 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:36.386767 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:36.394452 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:36.394487 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:36.886758 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:36.894499 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:36.894532 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:37.386859 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:37.394337 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:37.394364 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:37.886964 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:37.895438 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:37.895469 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:38.387094 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:38.394883 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:38.394913 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:38.887508 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:38.895291 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:38.895327 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:39.386715 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:39.394763 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:39.394792 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:39.886879 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:39.894943 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:39.894981 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:40.387590 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:40.395249 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:40.395292 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:40.887258 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:40.894919 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:40.894949 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:41.387597 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:41.395768 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:41.395798 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:41.886983 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:41.895268 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:41.895308 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:42.386826 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:42.394843 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:42.394873 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:42.887519 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:42.902876 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:42.902913 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:43.386728 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:43.394357 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:43.394384 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:43.886786 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:43.895412 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:43.895459 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:44.387171 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:44.395242 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:44.395279 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:44.886821 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:44.895444 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:44.895475 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:45.386983 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:45.394721 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:45.394753 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:45.887496 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:45.895586 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:45.895668 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:46.386786 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:46.394767 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:46.394804 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:46.886861 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:46.894616 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:46.894656 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:47.387157 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:47.394778 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:47.394808 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:47.887456 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:47.895356 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:47.895404 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[identical check list and "healthz check failed" trailer omitted]
	[the same 500 response, with every check ok except [-]poststarthook/start-service-ip-repair-controllers failed: reason withheld, was logged for each subsequent healthz poll at roughly 500 ms intervals: 13:36:48.386, 13:36:48.887, 13:36:49.386, 13:36:49.886, 13:36:50.387, 13:36:50.886, 13:36:51.387, 13:36:51.887, 13:36:52.387, 13:36:52.886, 13:36:53.387, 13:36:53.887, 13:36:54.387, 13:36:54.886, 13:36:55.386 and 13:36:55.887]
	I1007 13:36:56.387641 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:56.398580 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:56.398608 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:56.886875 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:56.895041 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:56.895084 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:57.387699 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:57.395450 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:57.395479 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:57.887610 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:57.896657 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:57.896684 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:58.386915 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:58.394646 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:58.394677 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:58.887178 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:58.894984 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:58.895018 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:59.387686 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:59.396357 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:59.396392 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:36:59.886713 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:36:59.894396 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:36:59.894449 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:00.386966 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:00.395158 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:00.395197 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:00.886785 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:00.894438 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:00.894482 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:01.386785 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:01.396195 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:01.396238 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:01.886776 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:01.894797 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:01.894833 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:02.387492 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:02.395511 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:02.395587 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:02.887311 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:02.900108 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:02.900151 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:03.387488 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:03.395282 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:03.395311 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:03.886807 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:03.894403 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:03.894431 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:04.386963 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:04.394810 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:04.394839 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:04.887114 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:04.896400 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:04.896436 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:05.386767 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:05.394362 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:05.394392 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:05.886917 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:05.894506 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:05.894534 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:06.386887 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:06.396477 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:06.396515 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
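	(The same verbose /healthz response repeats here roughly every 500 ms: every check reports ok except the start-service-ip-repair-controllers post-start hook, so the apiserver keeps answering 500 and the health-check loop keeps retrying. As a rough sketch of that polling pattern, and not minikube's actual api_server.go code, the snippet below issues the same verbose healthz request and prints only the failing checks; the endpoint URL, the 500 ms interval, the retry count, and the relaxed TLS handling are assumptions made purely for illustration.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)

	func main() {
		// Assumed apiserver address, copied from the log above for illustration only.
		url := "https://192.168.49.2:8443/healthz?verbose"

		client := &http.Client{
			Timeout: 5 * time.Second,
			// The cluster uses a self-signed CA; verification is skipped here purely
			// to keep the sketch self-contained, not as a recommendation.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		for attempt := 0; attempt < 20; attempt++ {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("healthz request failed: %v\n", err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()

			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver reports healthy")
				return
			}

			// Verbose output lists one check per line: "[+]<name> ok" or
			// "[-]<name> failed: <reason>"; only the failing ones are of interest.
			for _, line := range strings.Split(string(body), "\n") {
				line = strings.TrimSpace(line)
				if strings.HasPrefix(line, "[-]") {
					fmt.Printf("healthz returned %d, failing check: %s\n", resp.StatusCode, line)
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver still unhealthy after polling")
	}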
	I1007 13:37:06.886958 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:06.894862 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:06.894895 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:07.387465 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:07.395322 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:07.395366 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:07.886794 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:07.895764 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:07.895792 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:08.387357 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:08.395168 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:08.395202 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:08.886770 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:08.894369 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:08.894395 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:09.386775 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:09.394343 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:09.394380 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:09.886775 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:09.894524 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:09.894559 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:10.387128 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:10.394873 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:10.394899 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:10.886686 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:10.894334 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:10.894365 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:11.387572 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:11.395437 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:11.395472 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:11.886787 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:11.895306 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:11.895346 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:12.386830 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:12.394486 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:12.394515 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:12.887144 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:12.894843 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:12.894873 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:13.387246 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:13.395065 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:13.395093 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:13.887694 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:13.895584 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:13.895611 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:14.387188 1760312 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:14.387292 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:14.427566 1760312 cri.go:89] found id: "e8000ec26459b7aad5ed8b44f2048bb47b5c2e5646163694630298cc2300e8a0"
	I1007 13:37:14.427590 1760312 cri.go:89] found id: "3b0d066c45a241f7b2d17d7d70677c6239f78af5c5d52b9f7d13bc5a17a01625"
	I1007 13:37:14.427596 1760312 cri.go:89] found id: ""
	I1007 13:37:14.427602 1760312 logs.go:282] 2 containers: [e8000ec26459b7aad5ed8b44f2048bb47b5c2e5646163694630298cc2300e8a0 3b0d066c45a241f7b2d17d7d70677c6239f78af5c5d52b9f7d13bc5a17a01625]
	I1007 13:37:14.427667 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:14.431416 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:14.434846 1760312 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:14.434941 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:14.484972 1760312 cri.go:89] found id: "c04172c48f9d67d1f3a08064496ac572adc52eab6e18bf5c347b1ba681701baf"
	I1007 13:37:14.484993 1760312 cri.go:89] found id: "18a5dd2fe672b09013f2b9fed5dd79d02ec1a26b2545bd70e81c9ec3713bef2d"
	I1007 13:37:14.484998 1760312 cri.go:89] found id: ""
	I1007 13:37:14.485005 1760312 logs.go:282] 2 containers: [c04172c48f9d67d1f3a08064496ac572adc52eab6e18bf5c347b1ba681701baf 18a5dd2fe672b09013f2b9fed5dd79d02ec1a26b2545bd70e81c9ec3713bef2d]
	I1007 13:37:14.485063 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:14.488846 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:14.492123 1760312 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:14.492201 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:14.534166 1760312 cri.go:89] found id: ""
	I1007 13:37:14.534193 1760312 logs.go:282] 0 containers: []
	W1007 13:37:14.534203 1760312 logs.go:284] No container was found matching "coredns"
	I1007 13:37:14.534210 1760312 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:14.534340 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:14.573712 1760312 cri.go:89] found id: "2466a07d87bfdd5c36327bb7160a98d96585acb8b586b9d8f300d0080739e122"
	I1007 13:37:14.573733 1760312 cri.go:89] found id: "1e026a69d9307f214aba33fe7fc46446e93ecebc8c44fe9daa5210a14ecc1dee"
	I1007 13:37:14.573738 1760312 cri.go:89] found id: ""
	I1007 13:37:14.573745 1760312 logs.go:282] 2 containers: [2466a07d87bfdd5c36327bb7160a98d96585acb8b586b9d8f300d0080739e122 1e026a69d9307f214aba33fe7fc46446e93ecebc8c44fe9daa5210a14ecc1dee]
	I1007 13:37:14.573801 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:14.577362 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:14.581051 1760312 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:14.581162 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:14.618985 1760312 cri.go:89] found id: "aace5952020afc8f3e4244e29dadf8fbcab1d115d70838ecce5fa63eed6f7c59"
	I1007 13:37:14.619055 1760312 cri.go:89] found id: ""
	I1007 13:37:14.619077 1760312 logs.go:282] 1 containers: [aace5952020afc8f3e4244e29dadf8fbcab1d115d70838ecce5fa63eed6f7c59]
	I1007 13:37:14.619162 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:14.623632 1760312 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:14.623755 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:14.667150 1760312 cri.go:89] found id: "38ceffbcfed076af448bc5e2611547cc18e11104c67b228afbc2dee1c0bc2b9a"
	I1007 13:37:14.667180 1760312 cri.go:89] found id: "f7aa91e2995aa8dde7ecf57a2b2283f4ea2624262e6f83bc670d67e08441b315"
	I1007 13:37:14.667188 1760312 cri.go:89] found id: ""
	I1007 13:37:14.667196 1760312 logs.go:282] 2 containers: [38ceffbcfed076af448bc5e2611547cc18e11104c67b228afbc2dee1c0bc2b9a f7aa91e2995aa8dde7ecf57a2b2283f4ea2624262e6f83bc670d67e08441b315]
	I1007 13:37:14.667323 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:14.671216 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:14.675462 1760312 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:14.675646 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:14.719722 1760312 cri.go:89] found id: "23d16beac640b9def661d0bc58ee76936a6953a8db6100ce34bdc1ab29d605fa"
	I1007 13:37:14.719747 1760312 cri.go:89] found id: ""
	I1007 13:37:14.719755 1760312 logs.go:282] 1 containers: [23d16beac640b9def661d0bc58ee76936a6953a8db6100ce34bdc1ab29d605fa]
	I1007 13:37:14.719816 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:14.723184 1760312 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:14.723208 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:14.813840 1760312 logs.go:123] Gathering logs for kube-apiserver [e8000ec26459b7aad5ed8b44f2048bb47b5c2e5646163694630298cc2300e8a0] ...
	I1007 13:37:14.813877 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8000ec26459b7aad5ed8b44f2048bb47b5c2e5646163694630298cc2300e8a0"
	I1007 13:37:14.862271 1760312 logs.go:123] Gathering logs for kube-apiserver [3b0d066c45a241f7b2d17d7d70677c6239f78af5c5d52b9f7d13bc5a17a01625] ...
	I1007 13:37:14.862307 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b0d066c45a241f7b2d17d7d70677c6239f78af5c5d52b9f7d13bc5a17a01625"
	I1007 13:37:14.912235 1760312 logs.go:123] Gathering logs for etcd [18a5dd2fe672b09013f2b9fed5dd79d02ec1a26b2545bd70e81c9ec3713bef2d] ...
	I1007 13:37:14.912276 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18a5dd2fe672b09013f2b9fed5dd79d02ec1a26b2545bd70e81c9ec3713bef2d"
	I1007 13:37:14.966864 1760312 logs.go:123] Gathering logs for container status ...
	I1007 13:37:14.966900 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:15.018547 1760312 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:15.018606 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:37:15.295772 1760312 logs.go:123] Gathering logs for kube-scheduler [1e026a69d9307f214aba33fe7fc46446e93ecebc8c44fe9daa5210a14ecc1dee] ...
	I1007 13:37:15.295803 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e026a69d9307f214aba33fe7fc46446e93ecebc8c44fe9daa5210a14ecc1dee"
	I1007 13:37:15.344215 1760312 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:15.344246 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:15.436100 1760312 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:15.436199 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:15.458299 1760312 logs.go:123] Gathering logs for etcd [c04172c48f9d67d1f3a08064496ac572adc52eab6e18bf5c347b1ba681701baf] ...
	I1007 13:37:15.458328 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c04172c48f9d67d1f3a08064496ac572adc52eab6e18bf5c347b1ba681701baf"
	I1007 13:37:15.532849 1760312 logs.go:123] Gathering logs for kube-scheduler [2466a07d87bfdd5c36327bb7160a98d96585acb8b586b9d8f300d0080739e122] ...
	I1007 13:37:15.532944 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2466a07d87bfdd5c36327bb7160a98d96585acb8b586b9d8f300d0080739e122"
	I1007 13:37:15.612635 1760312 logs.go:123] Gathering logs for kube-proxy [aace5952020afc8f3e4244e29dadf8fbcab1d115d70838ecce5fa63eed6f7c59] ...
	I1007 13:37:15.612674 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aace5952020afc8f3e4244e29dadf8fbcab1d115d70838ecce5fa63eed6f7c59"
	I1007 13:37:15.674196 1760312 logs.go:123] Gathering logs for kube-controller-manager [38ceffbcfed076af448bc5e2611547cc18e11104c67b228afbc2dee1c0bc2b9a] ...
	I1007 13:37:15.674272 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38ceffbcfed076af448bc5e2611547cc18e11104c67b228afbc2dee1c0bc2b9a"
	I1007 13:37:15.734078 1760312 logs.go:123] Gathering logs for kube-controller-manager [f7aa91e2995aa8dde7ecf57a2b2283f4ea2624262e6f83bc670d67e08441b315] ...
	I1007 13:37:15.734113 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7aa91e2995aa8dde7ecf57a2b2283f4ea2624262e6f83bc670d67e08441b315"
	I1007 13:37:15.771173 1760312 logs.go:123] Gathering logs for kindnet [23d16beac640b9def661d0bc58ee76936a6953a8db6100ce34bdc1ab29d605fa] ...
	I1007 13:37:15.771202 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23d16beac640b9def661d0bc58ee76936a6953a8db6100ce34bdc1ab29d605fa"
	I1007 13:37:18.319071 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:19.699192 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 13:37:19.699221 1760312 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 13:37:19.699253 1760312 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:19.699321 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:19.773696 1760312 cri.go:89] found id: "e8000ec26459b7aad5ed8b44f2048bb47b5c2e5646163694630298cc2300e8a0"
	I1007 13:37:19.773717 1760312 cri.go:89] found id: "3b0d066c45a241f7b2d17d7d70677c6239f78af5c5d52b9f7d13bc5a17a01625"
	I1007 13:37:19.773722 1760312 cri.go:89] found id: ""
	I1007 13:37:19.773730 1760312 logs.go:282] 2 containers: [e8000ec26459b7aad5ed8b44f2048bb47b5c2e5646163694630298cc2300e8a0 3b0d066c45a241f7b2d17d7d70677c6239f78af5c5d52b9f7d13bc5a17a01625]
	I1007 13:37:19.773790 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:19.777922 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:19.781698 1760312 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:19.781770 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:19.831949 1760312 cri.go:89] found id: "c04172c48f9d67d1f3a08064496ac572adc52eab6e18bf5c347b1ba681701baf"
	I1007 13:37:19.832014 1760312 cri.go:89] found id: "18a5dd2fe672b09013f2b9fed5dd79d02ec1a26b2545bd70e81c9ec3713bef2d"
	I1007 13:37:19.832035 1760312 cri.go:89] found id: ""
	I1007 13:37:19.832050 1760312 logs.go:282] 2 containers: [c04172c48f9d67d1f3a08064496ac572adc52eab6e18bf5c347b1ba681701baf 18a5dd2fe672b09013f2b9fed5dd79d02ec1a26b2545bd70e81c9ec3713bef2d]
	I1007 13:37:19.832127 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:19.836013 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:19.839419 1760312 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:19.839584 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:19.878167 1760312 cri.go:89] found id: ""
	I1007 13:37:19.878189 1760312 logs.go:282] 0 containers: []
	W1007 13:37:19.878198 1760312 logs.go:284] No container was found matching "coredns"
	I1007 13:37:19.878204 1760312 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:19.878267 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:19.916618 1760312 cri.go:89] found id: "2466a07d87bfdd5c36327bb7160a98d96585acb8b586b9d8f300d0080739e122"
	I1007 13:37:19.916640 1760312 cri.go:89] found id: "1e026a69d9307f214aba33fe7fc46446e93ecebc8c44fe9daa5210a14ecc1dee"
	I1007 13:37:19.916645 1760312 cri.go:89] found id: ""
	I1007 13:37:19.916652 1760312 logs.go:282] 2 containers: [2466a07d87bfdd5c36327bb7160a98d96585acb8b586b9d8f300d0080739e122 1e026a69d9307f214aba33fe7fc46446e93ecebc8c44fe9daa5210a14ecc1dee]
	I1007 13:37:19.916709 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:19.920363 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:19.923820 1760312 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:19.923892 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:19.965737 1760312 cri.go:89] found id: "aace5952020afc8f3e4244e29dadf8fbcab1d115d70838ecce5fa63eed6f7c59"
	I1007 13:37:19.965759 1760312 cri.go:89] found id: ""
	I1007 13:37:19.965770 1760312 logs.go:282] 1 containers: [aace5952020afc8f3e4244e29dadf8fbcab1d115d70838ecce5fa63eed6f7c59]
	I1007 13:37:19.965824 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:19.969619 1760312 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:19.969716 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:20.005476 1760312 cri.go:89] found id: "38ceffbcfed076af448bc5e2611547cc18e11104c67b228afbc2dee1c0bc2b9a"
	I1007 13:37:20.005499 1760312 cri.go:89] found id: "f7aa91e2995aa8dde7ecf57a2b2283f4ea2624262e6f83bc670d67e08441b315"
	I1007 13:37:20.005504 1760312 cri.go:89] found id: ""
	I1007 13:37:20.005511 1760312 logs.go:282] 2 containers: [38ceffbcfed076af448bc5e2611547cc18e11104c67b228afbc2dee1c0bc2b9a f7aa91e2995aa8dde7ecf57a2b2283f4ea2624262e6f83bc670d67e08441b315]
	I1007 13:37:20.005580 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:20.010974 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:20.017355 1760312 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:20.017481 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:20.061526 1760312 cri.go:89] found id: "23d16beac640b9def661d0bc58ee76936a6953a8db6100ce34bdc1ab29d605fa"
	I1007 13:37:20.061550 1760312 cri.go:89] found id: ""
	I1007 13:37:20.061558 1760312 logs.go:282] 1 containers: [23d16beac640b9def661d0bc58ee76936a6953a8db6100ce34bdc1ab29d605fa]
	I1007 13:37:20.061637 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:20.065601 1760312 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:20.065630 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:37:20.483362 1760312 logs.go:123] Gathering logs for container status ...
	I1007 13:37:20.483406 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:20.573478 1760312 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:20.573512 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:20.601739 1760312 logs.go:123] Gathering logs for kube-apiserver [e8000ec26459b7aad5ed8b44f2048bb47b5c2e5646163694630298cc2300e8a0] ...
	I1007 13:37:20.601770 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8000ec26459b7aad5ed8b44f2048bb47b5c2e5646163694630298cc2300e8a0"
	I1007 13:37:20.674329 1760312 logs.go:123] Gathering logs for kube-apiserver [3b0d066c45a241f7b2d17d7d70677c6239f78af5c5d52b9f7d13bc5a17a01625] ...
	I1007 13:37:20.674362 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b0d066c45a241f7b2d17d7d70677c6239f78af5c5d52b9f7d13bc5a17a01625"
	I1007 13:37:20.724797 1760312 logs.go:123] Gathering logs for etcd [18a5dd2fe672b09013f2b9fed5dd79d02ec1a26b2545bd70e81c9ec3713bef2d] ...
	I1007 13:37:20.724827 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18a5dd2fe672b09013f2b9fed5dd79d02ec1a26b2545bd70e81c9ec3713bef2d"
	I1007 13:37:20.789731 1760312 logs.go:123] Gathering logs for kube-scheduler [1e026a69d9307f214aba33fe7fc46446e93ecebc8c44fe9daa5210a14ecc1dee] ...
	I1007 13:37:20.789770 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e026a69d9307f214aba33fe7fc46446e93ecebc8c44fe9daa5210a14ecc1dee"
	I1007 13:37:20.882445 1760312 logs.go:123] Gathering logs for kube-proxy [aace5952020afc8f3e4244e29dadf8fbcab1d115d70838ecce5fa63eed6f7c59] ...
	I1007 13:37:20.882478 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aace5952020afc8f3e4244e29dadf8fbcab1d115d70838ecce5fa63eed6f7c59"
	I1007 13:37:20.929318 1760312 logs.go:123] Gathering logs for kube-controller-manager [f7aa91e2995aa8dde7ecf57a2b2283f4ea2624262e6f83bc670d67e08441b315] ...
	I1007 13:37:20.929392 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7aa91e2995aa8dde7ecf57a2b2283f4ea2624262e6f83bc670d67e08441b315"
	I1007 13:37:20.967235 1760312 logs.go:123] Gathering logs for kindnet [23d16beac640b9def661d0bc58ee76936a6953a8db6100ce34bdc1ab29d605fa] ...
	I1007 13:37:20.967265 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23d16beac640b9def661d0bc58ee76936a6953a8db6100ce34bdc1ab29d605fa"
	I1007 13:37:21.016382 1760312 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:21.016413 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:21.093791 1760312 logs.go:123] Gathering logs for etcd [c04172c48f9d67d1f3a08064496ac572adc52eab6e18bf5c347b1ba681701baf] ...
	I1007 13:37:21.093828 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c04172c48f9d67d1f3a08064496ac572adc52eab6e18bf5c347b1ba681701baf"
	I1007 13:37:21.148980 1760312 logs.go:123] Gathering logs for kube-scheduler [2466a07d87bfdd5c36327bb7160a98d96585acb8b586b9d8f300d0080739e122] ...
	I1007 13:37:21.149018 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2466a07d87bfdd5c36327bb7160a98d96585acb8b586b9d8f300d0080739e122"
	I1007 13:37:21.233421 1760312 logs.go:123] Gathering logs for kube-controller-manager [38ceffbcfed076af448bc5e2611547cc18e11104c67b228afbc2dee1c0bc2b9a] ...
	I1007 13:37:21.233462 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38ceffbcfed076af448bc5e2611547cc18e11104c67b228afbc2dee1c0bc2b9a"
	I1007 13:37:21.302975 1760312 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:21.303015 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:23.899838 1760312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 13:37:23.908863 1760312 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1007 13:37:23.908940 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I1007 13:37:23.908946 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:23.908956 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:23.908960 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:23.923195 1760312 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1007 13:37:23.923308 1760312 api_server.go:141] control plane version: v1.31.1
	I1007 13:37:23.923328 1760312 api_server.go:131] duration metric: took 49.036696701s to wait for apiserver health ...
	I1007 13:37:23.923336 1760312 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:37:23.923364 1760312 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:37:23.923426 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:37:23.960782 1760312 cri.go:89] found id: "e8000ec26459b7aad5ed8b44f2048bb47b5c2e5646163694630298cc2300e8a0"
	I1007 13:37:23.960805 1760312 cri.go:89] found id: "3b0d066c45a241f7b2d17d7d70677c6239f78af5c5d52b9f7d13bc5a17a01625"
	I1007 13:37:23.960810 1760312 cri.go:89] found id: ""
	I1007 13:37:23.960830 1760312 logs.go:282] 2 containers: [e8000ec26459b7aad5ed8b44f2048bb47b5c2e5646163694630298cc2300e8a0 3b0d066c45a241f7b2d17d7d70677c6239f78af5c5d52b9f7d13bc5a17a01625]
	I1007 13:37:23.960909 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:23.964967 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:23.969067 1760312 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:37:23.969141 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:37:24.011001 1760312 cri.go:89] found id: "c04172c48f9d67d1f3a08064496ac572adc52eab6e18bf5c347b1ba681701baf"
	I1007 13:37:24.011029 1760312 cri.go:89] found id: "18a5dd2fe672b09013f2b9fed5dd79d02ec1a26b2545bd70e81c9ec3713bef2d"
	I1007 13:37:24.011036 1760312 cri.go:89] found id: ""
	I1007 13:37:24.011044 1760312 logs.go:282] 2 containers: [c04172c48f9d67d1f3a08064496ac572adc52eab6e18bf5c347b1ba681701baf 18a5dd2fe672b09013f2b9fed5dd79d02ec1a26b2545bd70e81c9ec3713bef2d]
	I1007 13:37:24.011139 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:24.016807 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:24.021054 1760312 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:37:24.021152 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:37:24.063408 1760312 cri.go:89] found id: ""
	I1007 13:37:24.063431 1760312 logs.go:282] 0 containers: []
	W1007 13:37:24.063440 1760312 logs.go:284] No container was found matching "coredns"
	I1007 13:37:24.063471 1760312 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:37:24.063580 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:37:24.105064 1760312 cri.go:89] found id: "2466a07d87bfdd5c36327bb7160a98d96585acb8b586b9d8f300d0080739e122"
	I1007 13:37:24.105089 1760312 cri.go:89] found id: "1e026a69d9307f214aba33fe7fc46446e93ecebc8c44fe9daa5210a14ecc1dee"
	I1007 13:37:24.105094 1760312 cri.go:89] found id: ""
	I1007 13:37:24.105101 1760312 logs.go:282] 2 containers: [2466a07d87bfdd5c36327bb7160a98d96585acb8b586b9d8f300d0080739e122 1e026a69d9307f214aba33fe7fc46446e93ecebc8c44fe9daa5210a14ecc1dee]
	I1007 13:37:24.105212 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:24.109179 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:24.112527 1760312 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:37:24.112600 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:37:24.152730 1760312 cri.go:89] found id: "aace5952020afc8f3e4244e29dadf8fbcab1d115d70838ecce5fa63eed6f7c59"
	I1007 13:37:24.152757 1760312 cri.go:89] found id: ""
	I1007 13:37:24.152765 1760312 logs.go:282] 1 containers: [aace5952020afc8f3e4244e29dadf8fbcab1d115d70838ecce5fa63eed6f7c59]
	I1007 13:37:24.152824 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:24.156329 1760312 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:37:24.156421 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:37:24.194337 1760312 cri.go:89] found id: "38ceffbcfed076af448bc5e2611547cc18e11104c67b228afbc2dee1c0bc2b9a"
	I1007 13:37:24.194365 1760312 cri.go:89] found id: "f7aa91e2995aa8dde7ecf57a2b2283f4ea2624262e6f83bc670d67e08441b315"
	I1007 13:37:24.194370 1760312 cri.go:89] found id: ""
	I1007 13:37:24.194377 1760312 logs.go:282] 2 containers: [38ceffbcfed076af448bc5e2611547cc18e11104c67b228afbc2dee1c0bc2b9a f7aa91e2995aa8dde7ecf57a2b2283f4ea2624262e6f83bc670d67e08441b315]
	I1007 13:37:24.194493 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:24.198691 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:24.202169 1760312 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:37:24.202237 1760312 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:37:24.240467 1760312 cri.go:89] found id: "23d16beac640b9def661d0bc58ee76936a6953a8db6100ce34bdc1ab29d605fa"
	I1007 13:37:24.240487 1760312 cri.go:89] found id: ""
	I1007 13:37:24.240495 1760312 logs.go:282] 1 containers: [23d16beac640b9def661d0bc58ee76936a6953a8db6100ce34bdc1ab29d605fa]
	I1007 13:37:24.240548 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:24.244095 1760312 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:37:24.244121 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:37:24.508991 1760312 logs.go:123] Gathering logs for etcd [18a5dd2fe672b09013f2b9fed5dd79d02ec1a26b2545bd70e81c9ec3713bef2d] ...
	I1007 13:37:24.509027 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18a5dd2fe672b09013f2b9fed5dd79d02ec1a26b2545bd70e81c9ec3713bef2d"
	I1007 13:37:24.577128 1760312 logs.go:123] Gathering logs for kube-scheduler [2466a07d87bfdd5c36327bb7160a98d96585acb8b586b9d8f300d0080739e122] ...
	I1007 13:37:24.577167 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2466a07d87bfdd5c36327bb7160a98d96585acb8b586b9d8f300d0080739e122"
	I1007 13:37:24.636167 1760312 logs.go:123] Gathering logs for kube-scheduler [1e026a69d9307f214aba33fe7fc46446e93ecebc8c44fe9daa5210a14ecc1dee] ...
	I1007 13:37:24.636203 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e026a69d9307f214aba33fe7fc46446e93ecebc8c44fe9daa5210a14ecc1dee"
	I1007 13:37:24.676107 1760312 logs.go:123] Gathering logs for kube-controller-manager [f7aa91e2995aa8dde7ecf57a2b2283f4ea2624262e6f83bc670d67e08441b315] ...
	I1007 13:37:24.676174 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7aa91e2995aa8dde7ecf57a2b2283f4ea2624262e6f83bc670d67e08441b315"
	I1007 13:37:24.713943 1760312 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:37:24.713972 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 13:37:24.786350 1760312 logs.go:123] Gathering logs for kube-apiserver [e8000ec26459b7aad5ed8b44f2048bb47b5c2e5646163694630298cc2300e8a0] ...
	I1007 13:37:24.786391 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8000ec26459b7aad5ed8b44f2048bb47b5c2e5646163694630298cc2300e8a0"
	I1007 13:37:24.844988 1760312 logs.go:123] Gathering logs for etcd [c04172c48f9d67d1f3a08064496ac572adc52eab6e18bf5c347b1ba681701baf] ...
	I1007 13:37:24.845022 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c04172c48f9d67d1f3a08064496ac572adc52eab6e18bf5c347b1ba681701baf"
	I1007 13:37:24.896680 1760312 logs.go:123] Gathering logs for kube-controller-manager [38ceffbcfed076af448bc5e2611547cc18e11104c67b228afbc2dee1c0bc2b9a] ...
	I1007 13:37:24.896736 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38ceffbcfed076af448bc5e2611547cc18e11104c67b228afbc2dee1c0bc2b9a"
	I1007 13:37:24.969870 1760312 logs.go:123] Gathering logs for container status ...
	I1007 13:37:24.969906 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:37:25.023981 1760312 logs.go:123] Gathering logs for kubelet ...
	I1007 13:37:25.024243 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:37:25.130082 1760312 logs.go:123] Gathering logs for dmesg ...
	I1007 13:37:25.130123 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:37:25.147149 1760312 logs.go:123] Gathering logs for kindnet [23d16beac640b9def661d0bc58ee76936a6953a8db6100ce34bdc1ab29d605fa] ...
	I1007 13:37:25.147250 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23d16beac640b9def661d0bc58ee76936a6953a8db6100ce34bdc1ab29d605fa"
	I1007 13:37:25.190669 1760312 logs.go:123] Gathering logs for kube-apiserver [3b0d066c45a241f7b2d17d7d70677c6239f78af5c5d52b9f7d13bc5a17a01625] ...
	I1007 13:37:25.190696 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b0d066c45a241f7b2d17d7d70677c6239f78af5c5d52b9f7d13bc5a17a01625"
	I1007 13:37:25.232119 1760312 logs.go:123] Gathering logs for kube-proxy [aace5952020afc8f3e4244e29dadf8fbcab1d115d70838ecce5fa63eed6f7c59] ...
	I1007 13:37:25.232148 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aace5952020afc8f3e4244e29dadf8fbcab1d115d70838ecce5fa63eed6f7c59"
	I1007 13:37:27.786336 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1007 13:37:27.786406 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:27.786438 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:27.786457 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:27.794623 1760312 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 13:37:27.812782 1760312 system_pods.go:59] 19 kube-system pods found
	I1007 13:37:27.812867 1760312 system_pods.go:61] "coredns-7c65d6cfc9-kjxj5" [8070db2c-e321-4c87-8547-58e4beea2997] Running
	I1007 13:37:27.812889 1760312 system_pods.go:61] "coredns-7c65d6cfc9-v7rpb" [0d53d549-d272-496d-9a2b-103079a895dc] Running
	I1007 13:37:27.812968 1760312 system_pods.go:61] "etcd-ha-362969" [5e7f3dfb-d1fd-4452-a02f-5ec06fcba72e] Running
	I1007 13:37:27.812992 1760312 system_pods.go:61] "etcd-ha-362969-m02" [cc660247-5651-4221-bff9-3a290b575922] Running
	I1007 13:37:27.813012 1760312 system_pods.go:61] "kindnet-2pfgm" [96a3f4da-fbc8-43d4-b3f4-16c74cd69f12] Running
	I1007 13:37:27.813041 1760312 system_pods.go:61] "kindnet-4rw9w" [3776e8ed-6aca-437e-993e-1694d14b89e9] Running
	I1007 13:37:27.813066 1760312 system_pods.go:61] "kindnet-xc9st" [df6ba527-3db6-45ff-8a10-c3686fbb6f5a] Running
	I1007 13:37:27.813090 1760312 system_pods.go:61] "kube-apiserver-ha-362969" [629d8ce4-d717-4473-b031-0d5f88808a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1007 13:37:27.813111 1760312 system_pods.go:61] "kube-apiserver-ha-362969-m02" [fabcb89b-63fc-4fee-89bb-544928e0a2d3] Running
	I1007 13:37:27.813134 1760312 system_pods.go:61] "kube-controller-manager-ha-362969" [dde58e65-918b-454a-a87a-0dafa82cdae5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1007 13:37:27.813157 1760312 system_pods.go:61] "kube-controller-manager-ha-362969-m02" [94bb6b49-a19c-4580-bb7c-22239a0e3cac] Running
	I1007 13:37:27.813177 1760312 system_pods.go:61] "kube-proxy-jwdpx" [4f2a7857-b193-4a82-9855-0df2c19be05a] Running
	I1007 13:37:27.813195 1760312 system_pods.go:61] "kube-proxy-qxlrd" [673a3e5a-389c-4f3f-8459-2b4877db9fcf] Running
	I1007 13:37:27.813213 1760312 system_pods.go:61] "kube-proxy-vxzkt" [99749a45-25b7-498f-9a54-0987cfa4fd9f] Running
	I1007 13:37:27.813228 1760312 system_pods.go:61] "kube-scheduler-ha-362969" [95875fe3-2c61-4acb-b123-91b7f1713b0b] Running
	I1007 13:37:27.813254 1760312 system_pods.go:61] "kube-scheduler-ha-362969-m02" [53985ef5-56a9-4f03-8287-6350fcba7a05] Running
	I1007 13:37:27.813272 1760312 system_pods.go:61] "kube-vip-ha-362969" [2fa91b29-6426-4c76-8ce7-be40d6d766fc] Running
	I1007 13:37:27.813289 1760312 system_pods.go:61] "kube-vip-ha-362969-m02" [a1ef7f88-278e-4f00-a2b8-e09182342288] Running
	I1007 13:37:27.813305 1760312 system_pods.go:61] "storage-provisioner" [1c4cf6c5-0a18-4569-9d54-1d33b756ffe8] Running
	I1007 13:37:27.813324 1760312 system_pods.go:74] duration metric: took 3.889977639s to wait for pod list to return data ...
	I1007 13:37:27.813350 1760312 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:37:27.813460 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1007 13:37:27.813488 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:27.813511 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:27.813555 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:27.820163 1760312 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 13:37:27.820610 1760312 default_sa.go:45] found service account: "default"
	I1007 13:37:27.820632 1760312 default_sa.go:55] duration metric: took 7.262439ms for default service account to be created ...
	I1007 13:37:27.820642 1760312 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:37:27.820704 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1007 13:37:27.820709 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:27.820717 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:27.820721 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:27.826003 1760312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 13:37:27.835156 1760312 system_pods.go:86] 19 kube-system pods found
	I1007 13:37:27.835237 1760312 system_pods.go:89] "coredns-7c65d6cfc9-kjxj5" [8070db2c-e321-4c87-8547-58e4beea2997] Running
	I1007 13:37:27.835261 1760312 system_pods.go:89] "coredns-7c65d6cfc9-v7rpb" [0d53d549-d272-496d-9a2b-103079a895dc] Running
	I1007 13:37:27.835281 1760312 system_pods.go:89] "etcd-ha-362969" [5e7f3dfb-d1fd-4452-a02f-5ec06fcba72e] Running
	I1007 13:37:27.835305 1760312 system_pods.go:89] "etcd-ha-362969-m02" [cc660247-5651-4221-bff9-3a290b575922] Running
	I1007 13:37:27.835324 1760312 system_pods.go:89] "kindnet-2pfgm" [96a3f4da-fbc8-43d4-b3f4-16c74cd69f12] Running
	I1007 13:37:27.835343 1760312 system_pods.go:89] "kindnet-4rw9w" [3776e8ed-6aca-437e-993e-1694d14b89e9] Running
	I1007 13:37:27.835360 1760312 system_pods.go:89] "kindnet-xc9st" [df6ba527-3db6-45ff-8a10-c3686fbb6f5a] Running
	I1007 13:37:27.835393 1760312 system_pods.go:89] "kube-apiserver-ha-362969" [629d8ce4-d717-4473-b031-0d5f88808a66] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1007 13:37:27.835416 1760312 system_pods.go:89] "kube-apiserver-ha-362969-m02" [fabcb89b-63fc-4fee-89bb-544928e0a2d3] Running
	I1007 13:37:27.835440 1760312 system_pods.go:89] "kube-controller-manager-ha-362969" [dde58e65-918b-454a-a87a-0dafa82cdae5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1007 13:37:27.835458 1760312 system_pods.go:89] "kube-controller-manager-ha-362969-m02" [94bb6b49-a19c-4580-bb7c-22239a0e3cac] Running
	I1007 13:37:27.835485 1760312 system_pods.go:89] "kube-proxy-jwdpx" [4f2a7857-b193-4a82-9855-0df2c19be05a] Running
	I1007 13:37:27.835503 1760312 system_pods.go:89] "kube-proxy-qxlrd" [673a3e5a-389c-4f3f-8459-2b4877db9fcf] Running
	I1007 13:37:27.835521 1760312 system_pods.go:89] "kube-proxy-vxzkt" [99749a45-25b7-498f-9a54-0987cfa4fd9f] Running
	I1007 13:37:27.835558 1760312 system_pods.go:89] "kube-scheduler-ha-362969" [95875fe3-2c61-4acb-b123-91b7f1713b0b] Running
	I1007 13:37:27.835578 1760312 system_pods.go:89] "kube-scheduler-ha-362969-m02" [53985ef5-56a9-4f03-8287-6350fcba7a05] Running
	I1007 13:37:27.835596 1760312 system_pods.go:89] "kube-vip-ha-362969" [2fa91b29-6426-4c76-8ce7-be40d6d766fc] Running
	I1007 13:37:27.835612 1760312 system_pods.go:89] "kube-vip-ha-362969-m02" [a1ef7f88-278e-4f00-a2b8-e09182342288] Running
	I1007 13:37:27.835630 1760312 system_pods.go:89] "storage-provisioner" [1c4cf6c5-0a18-4569-9d54-1d33b756ffe8] Running
	I1007 13:37:27.835660 1760312 system_pods.go:126] duration metric: took 15.011764ms to wait for k8s-apps to be running ...
	I1007 13:37:27.835681 1760312 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:37:27.835766 1760312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:37:27.857746 1760312 system_svc.go:56] duration metric: took 22.056019ms WaitForService to wait for kubelet
	I1007 13:37:27.857787 1760312 kubeadm.go:582] duration metric: took 1m13.617788628s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:37:27.857851 1760312 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:37:27.857969 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I1007 13:37:27.857985 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:27.858007 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:27.858027 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:27.863673 1760312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 13:37:27.865056 1760312 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 13:37:27.865099 1760312 node_conditions.go:123] node cpu capacity is 2
	I1007 13:37:27.865150 1760312 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 13:37:27.865157 1760312 node_conditions.go:123] node cpu capacity is 2
	I1007 13:37:27.865168 1760312 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 13:37:27.865173 1760312 node_conditions.go:123] node cpu capacity is 2
	I1007 13:37:27.865179 1760312 node_conditions.go:105] duration metric: took 7.31605ms to run NodePressure ...
	I1007 13:37:27.865195 1760312 start.go:241] waiting for startup goroutines ...
	I1007 13:37:27.865456 1760312 start.go:255] writing updated cluster config ...
	I1007 13:37:27.868837 1760312 out.go:201] 
	I1007 13:37:27.871984 1760312 config.go:182] Loaded profile config "ha-362969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:37:27.872179 1760312 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/config.json ...
	I1007 13:37:27.875372 1760312 out.go:177] * Starting "ha-362969-m04" worker node in "ha-362969" cluster
	I1007 13:37:27.878962 1760312 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 13:37:27.881624 1760312 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 13:37:27.884204 1760312 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:37:27.884253 1760312 cache.go:56] Caching tarball of preloaded images
	I1007 13:37:27.884281 1760312 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 13:37:27.884406 1760312 preload.go:172] Found /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1007 13:37:27.884429 1760312 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:37:27.884597 1760312 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/config.json ...
	I1007 13:37:27.907910 1760312 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 13:37:27.907934 1760312 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1007 13:37:27.907948 1760312 cache.go:194] Successfully downloaded all kic artifacts
	I1007 13:37:27.907972 1760312 start.go:360] acquireMachinesLock for ha-362969-m04: {Name:mk4a977f5533d32714d6af7d8fd52ee6d3a3b479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:37:27.908035 1760312 start.go:364] duration metric: took 45.931µs to acquireMachinesLock for "ha-362969-m04"
	I1007 13:37:27.908065 1760312 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:37:27.908076 1760312 fix.go:54] fixHost starting: m04
	I1007 13:37:27.908355 1760312 cli_runner.go:164] Run: docker container inspect ha-362969-m04 --format={{.State.Status}}
	I1007 13:37:27.938711 1760312 fix.go:112] recreateIfNeeded on ha-362969-m04: state=Stopped err=<nil>
	W1007 13:37:27.938740 1760312 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:37:27.941778 1760312 out.go:177] * Restarting existing docker container for "ha-362969-m04" ...
	I1007 13:37:27.944430 1760312 cli_runner.go:164] Run: docker start ha-362969-m04
	I1007 13:37:28.338106 1760312 cli_runner.go:164] Run: docker container inspect ha-362969-m04 --format={{.State.Status}}
	I1007 13:37:28.368007 1760312 kic.go:430] container "ha-362969-m04" state is running.
	I1007 13:37:28.368384 1760312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-362969-m04
	I1007 13:37:28.396603 1760312 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/config.json ...
	I1007 13:37:28.396868 1760312 machine.go:93] provisionDockerMachine start ...
	I1007 13:37:28.396938 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m04
	I1007 13:37:28.427915 1760312 main.go:141] libmachine: Using SSH client type: native
	I1007 13:37:28.428155 1760312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38336 <nil> <nil>}
	I1007 13:37:28.428175 1760312 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:37:28.428802 1760312 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1007 13:37:31.563173 1760312 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-362969-m04
	
	I1007 13:37:31.563199 1760312 ubuntu.go:169] provisioning hostname "ha-362969-m04"
	I1007 13:37:31.563266 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m04
	I1007 13:37:31.580451 1760312 main.go:141] libmachine: Using SSH client type: native
	I1007 13:37:31.580693 1760312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38336 <nil> <nil>}
	I1007 13:37:31.580710 1760312 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-362969-m04 && echo "ha-362969-m04" | sudo tee /etc/hostname
	I1007 13:37:31.728841 1760312 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-362969-m04
	
	I1007 13:37:31.728927 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m04
	I1007 13:37:31.746398 1760312 main.go:141] libmachine: Using SSH client type: native
	I1007 13:37:31.746642 1760312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38336 <nil> <nil>}
	I1007 13:37:31.746668 1760312 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-362969-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-362969-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-362969-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:37:31.883782 1760312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:37:31.883849 1760312 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18424-1688750/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-1688750/.minikube}
	I1007 13:37:31.883889 1760312 ubuntu.go:177] setting up certificates
	I1007 13:37:31.883924 1760312 provision.go:84] configureAuth start
	I1007 13:37:31.884038 1760312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-362969-m04
	I1007 13:37:31.903376 1760312 provision.go:143] copyHostCerts
	I1007 13:37:31.903418 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.pem
	I1007 13:37:31.903455 1760312 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.pem, removing ...
	I1007 13:37:31.903462 1760312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.pem
	I1007 13:37:31.903566 1760312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.pem (1078 bytes)
	I1007 13:37:31.903646 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18424-1688750/.minikube/cert.pem
	I1007 13:37:31.903674 1760312 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-1688750/.minikube/cert.pem, removing ...
	I1007 13:37:31.903679 1760312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-1688750/.minikube/cert.pem
	I1007 13:37:31.903709 1760312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-1688750/.minikube/cert.pem (1123 bytes)
	I1007 13:37:31.903752 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18424-1688750/.minikube/key.pem
	I1007 13:37:31.903775 1760312 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-1688750/.minikube/key.pem, removing ...
	I1007 13:37:31.903783 1760312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-1688750/.minikube/key.pem
	I1007 13:37:31.903815 1760312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-1688750/.minikube/key.pem (1679 bytes)
	I1007 13:37:31.903868 1760312 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca-key.pem org=jenkins.ha-362969-m04 san=[127.0.0.1 192.168.49.5 ha-362969-m04 localhost minikube]
	I1007 13:37:32.899044 1760312 provision.go:177] copyRemoteCerts
	I1007 13:37:32.899136 1760312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:37:32.899204 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m04
	I1007 13:37:32.916830 1760312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38336 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969-m04/id_rsa Username:docker}
	I1007 13:37:33.018700 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 13:37:33.018767 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 13:37:33.049486 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 13:37:33.049607 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 13:37:33.076487 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 13:37:33.076579 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 13:37:33.104115 1760312 provision.go:87] duration metric: took 1.220158326s to configureAuth
	I1007 13:37:33.104642 1760312 ubuntu.go:193] setting minikube options for container-runtime
	I1007 13:37:33.104877 1760312 config.go:182] Loaded profile config "ha-362969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:37:33.104988 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m04
	I1007 13:37:33.122638 1760312 main.go:141] libmachine: Using SSH client type: native
	I1007 13:37:33.122894 1760312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38336 <nil> <nil>}
	I1007 13:37:33.122916 1760312 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:37:33.409209 1760312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:37:33.409286 1760312 machine.go:96] duration metric: took 5.012399174s to provisionDockerMachine
	I1007 13:37:33.409314 1760312 start.go:293] postStartSetup for "ha-362969-m04" (driver="docker")
	I1007 13:37:33.409340 1760312 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:37:33.409474 1760312 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:37:33.409561 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m04
	I1007 13:37:33.427718 1760312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38336 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969-m04/id_rsa Username:docker}
	I1007 13:37:33.534274 1760312 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:37:33.538861 1760312 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 13:37:33.538900 1760312 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 13:37:33.538911 1760312 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 13:37:33.538918 1760312 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 13:37:33.538928 1760312 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-1688750/.minikube/addons for local assets ...
	I1007 13:37:33.538985 1760312 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-1688750/.minikube/files for local assets ...
	I1007 13:37:33.539062 1760312 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem -> 16941262.pem in /etc/ssl/certs
	I1007 13:37:33.539073 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem -> /etc/ssl/certs/16941262.pem
	I1007 13:37:33.539176 1760312 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:37:33.551056 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem --> /etc/ssl/certs/16941262.pem (1708 bytes)
	I1007 13:37:33.583688 1760312 start.go:296] duration metric: took 174.345198ms for postStartSetup
	I1007 13:37:33.583771 1760312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:37:33.583818 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m04
	I1007 13:37:33.601608 1760312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38336 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969-m04/id_rsa Username:docker}
	I1007 13:37:33.696424 1760312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 13:37:33.701382 1760312 fix.go:56] duration metric: took 5.793290697s for fixHost
	I1007 13:37:33.701446 1760312 start.go:83] releasing machines lock for "ha-362969-m04", held for 5.793396777s
	I1007 13:37:33.701528 1760312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-362969-m04
	I1007 13:37:33.720562 1760312 out.go:177] * Found network options:
	I1007 13:37:33.723314 1760312 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1007 13:37:33.726004 1760312 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 13:37:33.726033 1760312 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 13:37:33.726057 1760312 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 13:37:33.726080 1760312 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 13:37:33.726149 1760312 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:37:33.726205 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m04
	I1007 13:37:33.726507 1760312 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:37:33.726567 1760312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m04
	I1007 13:37:33.745348 1760312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38336 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969-m04/id_rsa Username:docker}
	I1007 13:37:33.747600 1760312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38336 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969-m04/id_rsa Username:docker}
	I1007 13:37:34.012605 1760312 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 13:37:34.017516 1760312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:37:34.027251 1760312 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1007 13:37:34.027333 1760312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:37:34.036990 1760312 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 13:37:34.037017 1760312 start.go:495] detecting cgroup driver to use...
	I1007 13:37:34.037049 1760312 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 13:37:34.037112 1760312 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:37:34.049546 1760312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:37:34.061428 1760312 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:37:34.061539 1760312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:37:34.075474 1760312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:37:34.088967 1760312 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:37:34.182139 1760312 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:37:34.281169 1760312 docker.go:233] disabling docker service ...
	I1007 13:37:34.281250 1760312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:37:34.303157 1760312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:37:34.315324 1760312 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:37:34.418171 1760312 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:37:34.522278 1760312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:37:34.536150 1760312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:37:34.561534 1760312 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:37:34.561609 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:37:34.571671 1760312 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:37:34.571756 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:37:34.584000 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:37:34.595953 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:37:34.610276 1760312 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:37:34.620547 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:37:34.632360 1760312 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:37:34.646059 1760312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:37:34.656449 1760312 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:37:34.664948 1760312 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:37:34.673920 1760312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:37:34.783088 1760312 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:37:34.918691 1760312 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:37:34.918786 1760312 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:37:34.923397 1760312 start.go:563] Will wait 60s for crictl version
	I1007 13:37:34.923511 1760312 ssh_runner.go:195] Run: which crictl
	I1007 13:37:34.929707 1760312 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:37:34.973168 1760312 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1007 13:37:34.973288 1760312 ssh_runner.go:195] Run: crio --version
	I1007 13:37:35.013144 1760312 ssh_runner.go:195] Run: crio --version
	I1007 13:37:35.104787 1760312 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1007 13:37:35.107505 1760312 out.go:177]   - env NO_PROXY=192.168.49.2
	I1007 13:37:35.110522 1760312 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1007 13:37:35.113302 1760312 cli_runner.go:164] Run: docker network inspect ha-362969 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 13:37:35.128406 1760312 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1007 13:37:35.132544 1760312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:37:35.146546 1760312 mustload.go:65] Loading cluster: ha-362969
	I1007 13:37:35.146815 1760312 config.go:182] Loaded profile config "ha-362969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:37:35.147069 1760312 cli_runner.go:164] Run: docker container inspect ha-362969 --format={{.State.Status}}
	I1007 13:37:35.167054 1760312 host.go:66] Checking if "ha-362969" exists ...
	I1007 13:37:35.167337 1760312 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969 for IP: 192.168.49.5
	I1007 13:37:35.167345 1760312 certs.go:194] generating shared ca certs ...
	I1007 13:37:35.167358 1760312 certs.go:226] acquiring lock for ca certs: {Name:mk3a082a64706c071bb4db632f3ec05c7c14e01c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:37:35.167693 1760312 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.key
	I1007 13:37:35.167778 1760312 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.key
	I1007 13:37:35.167796 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 13:37:35.167816 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 13:37:35.167832 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 13:37:35.167846 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 13:37:35.167913 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/1694126.pem (1338 bytes)
	W1007 13:37:35.167961 1760312 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/1694126_empty.pem, impossibly tiny 0 bytes
	I1007 13:37:35.167978 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 13:37:35.168007 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/ca.pem (1078 bytes)
	I1007 13:37:35.168038 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:37:35.168067 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/key.pem (1679 bytes)
	I1007 13:37:35.168113 1760312 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem (1708 bytes)
	I1007 13:37:35.168146 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:37:35.168161 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/1694126.pem -> /usr/share/ca-certificates/1694126.pem
	I1007 13:37:35.168174 1760312 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem -> /usr/share/ca-certificates/16941262.pem
	I1007 13:37:35.168196 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:37:35.194105 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:37:35.224530 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:37:35.253394 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 13:37:35.279024 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:37:35.311519 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/certs/1694126.pem --> /usr/share/ca-certificates/1694126.pem (1338 bytes)
	I1007 13:37:35.339299 1760312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/ssl/certs/16941262.pem --> /usr/share/ca-certificates/16941262.pem (1708 bytes)
	I1007 13:37:35.376772 1760312 ssh_runner.go:195] Run: openssl version
	I1007 13:37:35.382305 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:37:35.392206 1760312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:37:35.395864 1760312 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 13:04 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:37:35.395982 1760312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:37:35.403094 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:37:35.414157 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1694126.pem && ln -fs /usr/share/ca-certificates/1694126.pem /etc/ssl/certs/1694126.pem"
	I1007 13:37:35.425318 1760312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1694126.pem
	I1007 13:37:35.429152 1760312 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 13:22 /usr/share/ca-certificates/1694126.pem
	I1007 13:37:35.429217 1760312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1694126.pem
	I1007 13:37:35.436838 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1694126.pem /etc/ssl/certs/51391683.0"
	I1007 13:37:35.445771 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16941262.pem && ln -fs /usr/share/ca-certificates/16941262.pem /etc/ssl/certs/16941262.pem"
	I1007 13:37:35.455517 1760312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16941262.pem
	I1007 13:37:35.459742 1760312 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 13:22 /usr/share/ca-certificates/16941262.pem
	I1007 13:37:35.459812 1760312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16941262.pem
	I1007 13:37:35.468209 1760312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16941262.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:37:35.477482 1760312 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:37:35.481095 1760312 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 13:37:35.481143 1760312 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.1  false true} ...
	I1007 13:37:35.481230 1760312 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-362969-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-362969 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 13:37:35.481294 1760312 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:37:35.489776 1760312 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:37:35.489885 1760312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1007 13:37:35.504928 1760312 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1007 13:37:35.523685 1760312 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:37:35.543451 1760312 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1007 13:37:35.547013 1760312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:37:35.559103 1760312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:37:35.651250 1760312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:37:35.664004 1760312 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1007 13:37:35.664343 1760312 config.go:182] Loaded profile config "ha-362969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:37:35.667434 1760312 out.go:177] * Verifying Kubernetes components...
	I1007 13:37:35.670119 1760312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:37:35.764055 1760312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:37:35.782385 1760312 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18424-1688750/kubeconfig
	I1007 13:37:35.782762 1760312 kapi.go:59] client config for ha-362969: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/client.crt", KeyFile:"/home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/ha-362969/client.key", CAFile:"/home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e94a20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 13:37:35.782861 1760312 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1007 13:37:35.783131 1760312 node_ready.go:35] waiting up to 6m0s for node "ha-362969-m04" to be "Ready" ...
	I1007 13:37:35.783236 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:37:35.783259 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:35.783289 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:35.783309 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:35.786667 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:36.284088 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:37:36.284111 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:36.284121 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:36.284125 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:36.287036 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:36.784022 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:37:36.784046 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:36.784057 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:36.784062 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:36.786917 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:37.283348 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:37:37.283372 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:37.283383 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:37.283387 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:37.286271 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:37.783685 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:37:37.783709 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:37.783719 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:37.783724 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:37.786409 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:37.786992 1760312 node_ready.go:53] node "ha-362969-m04" has status "Ready":"Unknown"
	I1007 13:37:38.284327 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:37:38.284349 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:38.284359 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:38.284363 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:38.287246 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:38.783879 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:37:38.783905 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:38.783915 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:38.783920 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:38.786595 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:39.283403 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:37:39.283436 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:39.283445 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:39.283451 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:39.286462 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:39.784278 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:37:39.784303 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:39.784311 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:39.784317 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:39.787390 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:39.788250 1760312 node_ready.go:53] node "ha-362969-m04" has status "Ready":"Unknown"
	I1007 13:37:40.283442 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:37:40.283464 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:40.283474 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:40.283480 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:40.286577 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:40.783983 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:37:40.784009 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:40.784019 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:40.784023 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:40.786664 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:41.284202 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:37:41.284225 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:41.284236 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:41.284241 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:41.287014 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:41.783761 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:37:41.783785 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:41.783795 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:41.783800 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:41.786413 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:42.284091 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:37:42.284118 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:42.284128 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:42.284131 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:42.287188 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:42.288046 1760312 node_ready.go:53] node "ha-362969-m04" has status "Ready":"Unknown"
	I1007 13:37:42.783678 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:37:42.783723 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:42.783733 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:42.783739 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:42.786414 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:42.787016 1760312 node_ready.go:49] node "ha-362969-m04" has status "Ready":"True"
	I1007 13:37:42.787029 1760312 node_ready.go:38] duration metric: took 7.003861242s for node "ha-362969-m04" to be "Ready" ...
	I1007 13:37:42.787038 1760312 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:37:42.787103 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1007 13:37:42.787109 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:42.787117 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:42.787121 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:42.792262 1760312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 13:37:42.800223 1760312 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kjxj5" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:42.800390 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kjxj5
	I1007 13:37:42.800404 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:42.800414 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:42.800425 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:42.804077 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:42.805183 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:42.805200 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:42.805208 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:42.805215 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:42.808226 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:42.808705 1760312 pod_ready.go:93] pod "coredns-7c65d6cfc9-kjxj5" in "kube-system" namespace has status "Ready":"True"
	I1007 13:37:42.808725 1760312 pod_ready.go:82] duration metric: took 8.418098ms for pod "coredns-7c65d6cfc9-kjxj5" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:42.808736 1760312 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v7rpb" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:42.808799 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v7rpb
	I1007 13:37:42.808810 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:42.808817 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:42.808821 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:42.812745 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:42.813620 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:42.813647 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:42.813657 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:42.813661 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:42.816182 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:42.816833 1760312 pod_ready.go:93] pod "coredns-7c65d6cfc9-v7rpb" in "kube-system" namespace has status "Ready":"True"
	I1007 13:37:42.816856 1760312 pod_ready.go:82] duration metric: took 8.110965ms for pod "coredns-7c65d6cfc9-v7rpb" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:42.816892 1760312 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-362969" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:42.816975 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-362969
	I1007 13:37:42.816984 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:42.816992 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:42.816996 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:42.820151 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:42.821036 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:42.821053 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:42.821062 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:42.821066 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:42.823515 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:42.824241 1760312 pod_ready.go:93] pod "etcd-ha-362969" in "kube-system" namespace has status "Ready":"True"
	I1007 13:37:42.824262 1760312 pod_ready.go:82] duration metric: took 7.35769ms for pod "etcd-ha-362969" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:42.824274 1760312 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-362969-m02" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:42.824341 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-362969-m02
	I1007 13:37:42.824353 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:42.824361 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:42.824366 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:42.826987 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:42.827780 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:37:42.827798 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:42.827807 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:42.827811 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:42.830275 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:42.830859 1760312 pod_ready.go:93] pod "etcd-ha-362969-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 13:37:42.830878 1760312 pod_ready.go:82] duration metric: took 6.595965ms for pod "etcd-ha-362969-m02" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:42.830913 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-362969" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:42.984352 1760312 request.go:632] Waited for 153.345831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:42.984417 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:42.984424 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:42.984438 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:42.984448 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:42.987382 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:43.184435 1760312 request.go:632] Waited for 196.249789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:43.184522 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:43.184549 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:43.184560 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:43.184564 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:43.187839 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:43.383741 1760312 request.go:632] Waited for 52.315428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:43.383817 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:43.383827 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:43.383837 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:43.383846 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:43.386771 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:43.584028 1760312 request.go:632] Waited for 196.331123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:43.584092 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:43.584103 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:43.584113 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:43.584119 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:43.587815 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:43.831189 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:43.831212 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:43.831221 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:43.831225 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:43.834130 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:43.984096 1760312 request.go:632] Waited for 149.273337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:43.984182 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:43.984194 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:43.984204 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:43.984208 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:43.986939 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:44.331150 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:44.331173 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:44.331182 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:44.331187 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:44.334294 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:44.384426 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:44.384518 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:44.384533 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:44.384538 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:44.387242 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:44.831146 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:44.831168 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:44.831177 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:44.831183 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:44.834265 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:44.835209 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:44.835234 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:44.835244 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:44.835275 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:44.838314 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:44.838966 1760312 pod_ready.go:103] pod "kube-apiserver-ha-362969" in "kube-system" namespace has status "Ready":"False"
	I1007 13:37:45.331143 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:45.331169 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:45.331178 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:45.331183 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:45.334190 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:45.334942 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:45.334960 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:45.334969 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:45.334974 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:45.337474 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:45.831277 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:45.831342 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:45.831357 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:45.831363 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:45.834245 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:45.835306 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:45.835327 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:45.835337 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:45.835344 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:45.838085 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:46.332104 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:46.332128 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:46.332138 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:46.332143 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:46.334979 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:46.335832 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:46.335883 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:46.335899 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:46.335904 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:46.338567 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:46.831320 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:46.831343 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:46.831352 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:46.831359 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:46.834010 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:46.834918 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:46.834948 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:46.834957 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:46.834961 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:46.837384 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:47.331790 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:47.331818 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:47.331827 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:47.331832 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:47.335313 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:47.336347 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:47.336367 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:47.336377 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:47.336383 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:47.339097 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:47.339686 1760312 pod_ready.go:103] pod "kube-apiserver-ha-362969" in "kube-system" namespace has status "Ready":"False"
	I1007 13:37:47.831199 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:47.831220 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:47.831229 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:47.831236 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:47.833993 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:47.834808 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:47.834828 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:47.834838 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:47.834843 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:47.837341 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:48.331259 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:48.331284 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:48.331295 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:48.331302 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:48.334418 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:48.335261 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:48.335281 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:48.335302 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:48.335312 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:48.337878 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:48.831145 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:48.831171 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:48.831180 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:48.831184 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:48.834108 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:48.835248 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:48.835274 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:48.835289 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:48.835293 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:48.838301 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:49.331218 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:49.331247 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:49.331257 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:49.331263 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:49.334173 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:49.335141 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:49.335159 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:49.335176 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:49.335185 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:49.337905 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:49.831145 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:49.831167 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:49.831176 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:49.831180 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:49.834204 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:49.834997 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:49.835019 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:49.835029 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:49.835034 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:49.837645 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:49.838214 1760312 pod_ready.go:103] pod "kube-apiserver-ha-362969" in "kube-system" namespace has status "Ready":"False"
	I1007 13:37:50.331486 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:50.331508 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:50.331518 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:50.331525 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:50.334484 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:50.335194 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:50.335212 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:50.335221 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:50.335225 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:50.338153 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:50.832087 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:50.832110 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:50.832119 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:50.832132 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:50.835136 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:50.835895 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:50.835914 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:50.835924 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:50.835927 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:50.838637 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:51.332042 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:51.332066 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:51.332075 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:51.332080 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:51.334978 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:51.336052 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:51.336070 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:51.336082 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:51.336088 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:51.338673 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:51.831496 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:51.831587 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:51.831604 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:51.831609 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:51.834733 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:51.835498 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:51.835518 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:51.835573 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:51.835578 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:51.838011 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:51.839202 1760312 pod_ready.go:103] pod "kube-apiserver-ha-362969" in "kube-system" namespace has status "Ready":"False"
	I1007 13:37:52.331195 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:52.331213 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:52.331219 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:52.331222 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:52.334443 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:52.335329 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:52.335345 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:52.335354 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:52.335358 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:52.338185 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:52.832076 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:52.832100 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:52.832109 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:52.832115 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:52.834967 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:52.835747 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:52.835803 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:52.835829 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:52.835841 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:52.838118 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:53.331188 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:53.331211 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:53.331221 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:53.331227 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:53.334260 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:53.335406 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:53.335425 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:53.335435 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:53.335441 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:53.338002 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:53.831666 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:53.831689 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:53.831700 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:53.831705 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:53.834281 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:53.835339 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:53.835355 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:53.835363 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:53.835368 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:53.837825 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:54.331187 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:54.331211 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:54.331221 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:54.331227 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:54.334193 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:54.335085 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:54.335102 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:54.335112 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:54.335117 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:54.337433 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:54.338226 1760312 pod_ready.go:103] pod "kube-apiserver-ha-362969" in "kube-system" namespace has status "Ready":"False"
	I1007 13:37:54.831453 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:54.831474 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:54.831484 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:54.831487 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:54.834318 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:54.835132 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:54.835153 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:54.835164 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:54.835167 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:54.837624 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:55.332019 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:55.332044 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:55.332055 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:55.332060 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:55.335013 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:55.335971 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:55.335992 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:55.336002 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:55.336007 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:55.338579 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:55.831859 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:55.831882 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:55.831892 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:55.831897 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:55.834604 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:55.835646 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:55.835666 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:55.835675 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:55.835680 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:55.838144 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:56.331439 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:56.331463 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:56.331473 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:56.331477 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:56.334457 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:56.335394 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:56.335412 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:56.335421 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:56.335427 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:56.337759 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:56.338326 1760312 pod_ready.go:103] pod "kube-apiserver-ha-362969" in "kube-system" namespace has status "Ready":"False"
	I1007 13:37:56.831411 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969
	I1007 13:37:56.831432 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:56.831441 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:56.831445 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:56.838647 1760312 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 13:37:56.839311 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:56.839321 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:56.839330 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:56.839335 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:56.848889 1760312 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 13:37:56.849428 1760312 pod_ready.go:98] node "ha-362969" hosting pod "kube-apiserver-ha-362969" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-362969" has status "Ready":"Unknown"
	I1007 13:37:56.849443 1760312 pod_ready.go:82] duration metric: took 14.018515598s for pod "kube-apiserver-ha-362969" in "kube-system" namespace to be "Ready" ...
	E1007 13:37:56.849452 1760312 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-362969" hosting pod "kube-apiserver-ha-362969" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-362969" has status "Ready":"Unknown"
	I1007 13:37:56.849459 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-362969-m02" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:56.849521 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969-m02
	I1007 13:37:56.849542 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:56.849557 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:56.849564 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:56.868391 1760312 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1007 13:37:56.869196 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:37:56.869235 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:56.869274 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:56.869297 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:56.893848 1760312 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I1007 13:37:56.894471 1760312 pod_ready.go:93] pod "kube-apiserver-ha-362969-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 13:37:56.894525 1760312 pod_ready.go:82] duration metric: took 45.057645ms for pod "kube-apiserver-ha-362969-m02" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:56.894551 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-362969" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:56.894645 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-362969
	I1007 13:37:56.894684 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:56.894708 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:56.894724 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:56.908326 1760312 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1007 13:37:56.909248 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:56.909292 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:56.909332 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:56.909354 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:56.917703 1760312 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 13:37:56.918345 1760312 pod_ready.go:98] node "ha-362969" hosting pod "kube-controller-manager-ha-362969" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-362969" has status "Ready":"Unknown"
	I1007 13:37:56.918420 1760312 pod_ready.go:82] duration metric: took 23.849227ms for pod "kube-controller-manager-ha-362969" in "kube-system" namespace to be "Ready" ...
	E1007 13:37:56.918447 1760312 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-362969" hosting pod "kube-controller-manager-ha-362969" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-362969" has status "Ready":"Unknown"
	I1007 13:37:56.918466 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-362969-m02" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:56.918564 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-362969-m02
	I1007 13:37:56.918597 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:56.918617 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:56.918634 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:56.928227 1760312 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 13:37:56.929194 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:37:56.929249 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:56.929273 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:56.929291 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:56.934726 1760312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 13:37:56.935687 1760312 pod_ready.go:93] pod "kube-controller-manager-ha-362969-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 13:37:56.935743 1760312 pod_ready.go:82] duration metric: took 17.239625ms for pod "kube-controller-manager-ha-362969-m02" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:56.935770 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jwdpx" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:56.935871 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jwdpx
	I1007 13:37:56.935909 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:56.935931 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:56.935948 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:56.938663 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:56.939434 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m04
	I1007 13:37:56.939480 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:56.939505 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:56.939523 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:56.952544 1760312 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1007 13:37:56.953168 1760312 pod_ready.go:93] pod "kube-proxy-jwdpx" in "kube-system" namespace has status "Ready":"True"
	I1007 13:37:56.953210 1760312 pod_ready.go:82] duration metric: took 17.421035ms for pod "kube-proxy-jwdpx" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:56.953249 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qxlrd" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:57.031669 1760312 request.go:632] Waited for 78.336444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qxlrd
	I1007 13:37:57.031825 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qxlrd
	I1007 13:37:57.031878 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:57.031912 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:57.031944 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:57.056591 1760312 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I1007 13:37:57.232214 1760312 request.go:632] Waited for 172.207366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:37:57.232290 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:37:57.232341 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:57.232354 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:57.232359 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:57.237613 1760312 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 13:37:57.238299 1760312 pod_ready.go:93] pod "kube-proxy-qxlrd" in "kube-system" namespace has status "Ready":"True"
	I1007 13:37:57.238351 1760312 pod_ready.go:82] duration metric: took 285.077933ms for pod "kube-proxy-qxlrd" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:57.238379 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vxzkt" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:57.431902 1760312 request.go:632] Waited for 193.378362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vxzkt
	I1007 13:37:57.431965 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vxzkt
	I1007 13:37:57.431973 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:57.431989 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:57.431998 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:57.436192 1760312 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 13:37:57.632182 1760312 request.go:632] Waited for 195.329476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:57.632291 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:57.632308 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:57.632317 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:57.632326 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:57.635308 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:57.635933 1760312 pod_ready.go:98] node "ha-362969" hosting pod "kube-proxy-vxzkt" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-362969" has status "Ready":"Unknown"
	I1007 13:37:57.635954 1760312 pod_ready.go:82] duration metric: took 397.543984ms for pod "kube-proxy-vxzkt" in "kube-system" namespace to be "Ready" ...
	E1007 13:37:57.635964 1760312 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-362969" hosting pod "kube-proxy-vxzkt" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-362969" has status "Ready":"Unknown"
	I1007 13:37:57.635995 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-362969" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:57.832343 1760312 request.go:632] Waited for 196.266384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-362969
	I1007 13:37:57.832404 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-362969
	I1007 13:37:57.832410 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:57.832420 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:57.832424 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:57.835614 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:58.032063 1760312 request.go:632] Waited for 195.396814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:58.032193 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969
	I1007 13:37:58.032211 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:58.032221 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:58.032224 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:58.035214 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:58.036166 1760312 pod_ready.go:98] node "ha-362969" hosting pod "kube-scheduler-ha-362969" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-362969" has status "Ready":"Unknown"
	I1007 13:37:58.036232 1760312 pod_ready.go:82] duration metric: took 400.22009ms for pod "kube-scheduler-ha-362969" in "kube-system" namespace to be "Ready" ...
	E1007 13:37:58.036248 1760312 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-362969" hosting pod "kube-scheduler-ha-362969" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-362969" has status "Ready":"Unknown"
	I1007 13:37:58.036257 1760312 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-362969-m02" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:58.231628 1760312 request.go:632] Waited for 195.294771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-362969-m02
	I1007 13:37:58.231696 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-362969-m02
	I1007 13:37:58.231704 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:58.231731 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:58.231745 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:58.234810 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:58.431832 1760312 request.go:632] Waited for 196.384134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:37:58.431905 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-362969-m02
	I1007 13:37:58.431913 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:58.431922 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:58.431929 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:58.434941 1760312 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 13:37:58.435655 1760312 pod_ready.go:93] pod "kube-scheduler-ha-362969-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 13:37:58.435675 1760312 pod_ready.go:82] duration metric: took 399.410394ms for pod "kube-scheduler-ha-362969-m02" in "kube-system" namespace to be "Ready" ...
	I1007 13:37:58.435687 1760312 pod_ready.go:39] duration metric: took 15.648640356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:37:58.435700 1760312 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:37:58.435766 1760312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:37:58.447142 1760312 system_svc.go:56] duration metric: took 11.433256ms WaitForService to wait for kubelet
	I1007 13:37:58.447211 1760312 kubeadm.go:582] duration metric: took 22.782822396s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:37:58.447237 1760312 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:37:58.631788 1760312 request.go:632] Waited for 184.478351ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1007 13:37:58.631867 1760312 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I1007 13:37:58.631878 1760312 round_trippers.go:469] Request Headers:
	I1007 13:37:58.631888 1760312 round_trippers.go:473]     Accept: application/json, */*
	I1007 13:37:58.631933 1760312 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 13:37:58.635326 1760312 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 13:37:58.636742 1760312 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 13:37:58.636775 1760312 node_conditions.go:123] node cpu capacity is 2
	I1007 13:37:58.636787 1760312 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 13:37:58.636812 1760312 node_conditions.go:123] node cpu capacity is 2
	I1007 13:37:58.636824 1760312 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 13:37:58.636829 1760312 node_conditions.go:123] node cpu capacity is 2
	I1007 13:37:58.636834 1760312 node_conditions.go:105] duration metric: took 189.590873ms to run NodePressure ...
	I1007 13:37:58.636845 1760312 start.go:241] waiting for startup goroutines ...
	I1007 13:37:58.636870 1760312 start.go:255] writing updated cluster config ...
	I1007 13:37:58.637232 1760312 ssh_runner.go:195] Run: rm -f paused
	I1007 13:37:58.709973 1760312 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:37:58.714692 1760312 out.go:177] * Done! kubectl is now configured to use "ha-362969" cluster and "default" namespace by default
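	
	The log above shows the readiness-wait pattern minikube uses after restart: it repeatedly GETs the pod object and its hosting node, checks the pod's Ready condition, and backs off when client-side throttling kicks in. Purely as an illustration of that pattern (this is a minimal sketch, not minikube's pod_ready.go implementation; the kubeconfig path and the 500ms cadence are assumptions chosen to roughly match the log), a standalone client-go poller could look like this:
	
	// poll_pod_ready.go - sketch of the readiness polling visible in the log above.
	// Assumptions: kubeconfig path, namespace/pod name, and poll interval are illustrative.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumed kubeconfig location; minikube manages its own profile-specific config.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// Overall wait budget, similar in spirit to the 6m0s waits in the log.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
	
		// Poll roughly every 500ms, mirroring the GET cadence seen above.
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
	
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-ha-362969", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			ready := false
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
			if ready {
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod to become Ready")
				return
			case <-ticker.C:
				// next poll
			}
		}
	}
	
	Note that, like minikube, the real wait also skips pods whose hosting node is not Ready (the pod_ready.go:98 lines above); that node-level check is omitted here for brevity.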
	
	
	==> CRI-O <==
	Oct 07 13:37:16 ha-362969 crio[644]: time="2024-10-07 13:37:16.067483332Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 07 13:37:16 ha-362969 crio[644]: time="2024-10-07 13:37:16.142733738Z" level=info msg="Created container fd373a9692e794a434bb49be84ace9bdd3d9c753aa171c7424d4c61843aa4ac9: kube-system/kube-apiserver-ha-362969/kube-apiserver" id=d043f0b5-7dd1-426f-a6d6-1a1afbe333e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 07 13:37:16 ha-362969 crio[644]: time="2024-10-07 13:37:16.143345945Z" level=info msg="Starting container: fd373a9692e794a434bb49be84ace9bdd3d9c753aa171c7424d4c61843aa4ac9" id=accb63b5-5a05-477d-a70b-d6dfa0ba881a name=/runtime.v1.RuntimeService/StartContainer
	Oct 07 13:37:16 ha-362969 crio[644]: time="2024-10-07 13:37:16.151721064Z" level=info msg="Started container" PID=1836 containerID=fd373a9692e794a434bb49be84ace9bdd3d9c753aa171c7424d4c61843aa4ac9 description=kube-system/kube-apiserver-ha-362969/kube-apiserver id=accb63b5-5a05-477d-a70b-d6dfa0ba881a name=/runtime.v1.RuntimeService/StartContainer sandboxID=a26524d904f4f1dcb4eedefdb9c74f43d6980559e2262bf3ab26e0d4f8d3468a
	Oct 07 13:37:19 ha-362969 conmon[947]: conmon 5a872e8f0c2a48e56561 <ninfo>: container 967 exited with status 1
	Oct 07 13:37:20 ha-362969 crio[644]: time="2024-10-07 13:37:20.076180208Z" level=info msg="Checking image status: ghcr.io/kube-vip/kube-vip:v0.8.3" id=50161180-3ca4-4265-9755-2c8a2d7d3df3 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 13:37:20 ha-362969 crio[644]: time="2024-10-07 13:37:20.076430572Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4eadde00b6c50b581474eaa28b09bfcdd974ccaab8bafac22b08e7d2ecd66fc1,RepoTags:[ghcr.io/kube-vip/kube-vip:v0.8.3],RepoDigests:[ghcr.io/kube-vip/kube-vip@sha256:54b8aed2f90c88c75900d8e434570a2a4975d7e035a674c4a2370733c4f76694 ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4],Size_:48793563,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=50161180-3ca4-4265-9755-2c8a2d7d3df3 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 13:37:20 ha-362969 crio[644]: time="2024-10-07 13:37:20.077072768Z" level=info msg="Checking image status: ghcr.io/kube-vip/kube-vip:v0.8.3" id=dd3b468f-015d-4e2f-b43c-08bd86a4098f name=/runtime.v1.ImageService/ImageStatus
	Oct 07 13:37:20 ha-362969 crio[644]: time="2024-10-07 13:37:20.077252915Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:4eadde00b6c50b581474eaa28b09bfcdd974ccaab8bafac22b08e7d2ecd66fc1,RepoTags:[ghcr.io/kube-vip/kube-vip:v0.8.3],RepoDigests:[ghcr.io/kube-vip/kube-vip@sha256:54b8aed2f90c88c75900d8e434570a2a4975d7e035a674c4a2370733c4f76694 ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4],Size_:48793563,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=dd3b468f-015d-4e2f-b43c-08bd86a4098f name=/runtime.v1.ImageService/ImageStatus
	Oct 07 13:37:20 ha-362969 crio[644]: time="2024-10-07 13:37:20.077964549Z" level=info msg="Creating container: kube-system/kube-vip-ha-362969/kube-vip" id=f72612d6-e0c0-4093-9060-dc4da6d61a15 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 07 13:37:20 ha-362969 crio[644]: time="2024-10-07 13:37:20.078064706Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 07 13:37:20 ha-362969 crio[644]: time="2024-10-07 13:37:20.095187693Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/375f5faa89851082295808beb962d5881fd28f4eda1b5fe460e5bcd4d2b9ed8e/merged/etc/passwd: no such file or directory"
	Oct 07 13:37:20 ha-362969 crio[644]: time="2024-10-07 13:37:20.095236348Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/375f5faa89851082295808beb962d5881fd28f4eda1b5fe460e5bcd4d2b9ed8e/merged/etc/group: no such file or directory"
	Oct 07 13:37:20 ha-362969 crio[644]: time="2024-10-07 13:37:20.152473425Z" level=info msg="Created container 3eb3b7fd6e3ba84baf2742d5486729d5e42aa5acec48c30b49cbce0efa3e6ef3: kube-system/kube-vip-ha-362969/kube-vip" id=f72612d6-e0c0-4093-9060-dc4da6d61a15 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 07 13:37:20 ha-362969 crio[644]: time="2024-10-07 13:37:20.153069830Z" level=info msg="Starting container: 3eb3b7fd6e3ba84baf2742d5486729d5e42aa5acec48c30b49cbce0efa3e6ef3" id=05433008-4d19-421d-9ae4-0ea600c9d35a name=/runtime.v1.RuntimeService/StartContainer
	Oct 07 13:37:20 ha-362969 crio[644]: time="2024-10-07 13:37:20.159239484Z" level=info msg="Started container" PID=1888 containerID=3eb3b7fd6e3ba84baf2742d5486729d5e42aa5acec48c30b49cbce0efa3e6ef3 description=kube-system/kube-vip-ha-362969/kube-vip id=05433008-4d19-421d-9ae4-0ea600c9d35a name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4bbee75b84653a0ae0e6e3b71c9b2091503cd84f0bc0c736fbe58a8ecce556c
	Oct 07 13:37:26 ha-362969 crio[644]: time="2024-10-07 13:37:26.808796072Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=051a280b-1c1a-4a3a-b658-c5f3edc9a524 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 13:37:26 ha-362969 crio[644]: time="2024-10-07 13:37:26.809008685Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=051a280b-1c1a-4a3a-b658-c5f3edc9a524 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 13:37:26 ha-362969 crio[644]: time="2024-10-07 13:37:26.810145891Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=2b8e3f43-544b-4e8c-8c17-ef1ff25b8db6 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 13:37:26 ha-362969 crio[644]: time="2024-10-07 13:37:26.810325881Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=2b8e3f43-544b-4e8c-8c17-ef1ff25b8db6 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 13:37:26 ha-362969 crio[644]: time="2024-10-07 13:37:26.811068111Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-362969/kube-controller-manager" id=e5bf32de-64b8-4a55-864d-84f788213326 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 07 13:37:26 ha-362969 crio[644]: time="2024-10-07 13:37:26.811162886Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 07 13:37:26 ha-362969 crio[644]: time="2024-10-07 13:37:26.887468939Z" level=info msg="Created container 0e6adae3ac6cefac7ff97a0138c4eb31908cc6af8f0e3c0ea48d69987d26bd09: kube-system/kube-controller-manager-ha-362969/kube-controller-manager" id=e5bf32de-64b8-4a55-864d-84f788213326 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 07 13:37:26 ha-362969 crio[644]: time="2024-10-07 13:37:26.888365265Z" level=info msg="Starting container: 0e6adae3ac6cefac7ff97a0138c4eb31908cc6af8f0e3c0ea48d69987d26bd09" id=b04f97da-8b07-4690-b330-a9f117df58e9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 07 13:37:26 ha-362969 crio[644]: time="2024-10-07 13:37:26.895357993Z" level=info msg="Started container" PID=1931 containerID=0e6adae3ac6cefac7ff97a0138c4eb31908cc6af8f0e3c0ea48d69987d26bd09 description=kube-system/kube-controller-manager-ha-362969/kube-controller-manager id=b04f97da-8b07-4690-b330-a9f117df58e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=62543047cdf1af3ccb4fbe7055dadd8db122611d5ef973152e05ee5cc75a77ef
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0e6adae3ac6ce       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   34 seconds ago       Running             kube-controller-manager   8                   62543047cdf1a       kube-controller-manager-ha-362969
	3eb3b7fd6e3ba       4eadde00b6c50b581474eaa28b09bfcdd974ccaab8bafac22b08e7d2ecd66fc1   41 seconds ago       Running             kube-vip                  3                   a4bbee75b8465       kube-vip-ha-362969
	fd373a9692e79       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   45 seconds ago       Running             kube-apiserver            4                   a26524d904f4f       kube-apiserver-ha-362969
	8c0dc9ea5e9bd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Running             storage-provisioner       5                   9fc1ad6dd5f35       storage-provisioner
	37c605aef5be2       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   About a minute ago   Exited              kube-controller-manager   7                   62543047cdf1a       kube-controller-manager-ha-362969
	0c7f727411717       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   About a minute ago   Running             coredns                   2                   462ac0c07252c       coredns-7c65d6cfc9-v7rpb
	539bc8f0ab19c       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   About a minute ago   Running             coredns                   2                   5fabbeb556481       coredns-7c65d6cfc9-kjxj5
	863a39198e6b6       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   2                   6945e2576d37a       busybox-7dff88458-c7s47
	913bfbed974bd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       4                   9fc1ad6dd5f35       storage-provisioner
	eca1129019b9e       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d   About a minute ago   Running             kube-proxy                2                   4e64a7c5a7170       kube-proxy-vxzkt
	2ae65e236f395       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   About a minute ago   Running             kindnet-cni               2                   0cacfe2bc53d0       kindnet-2pfgm
	0cd11b4372487       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   About a minute ago   Exited              kube-apiserver            3                   a26524d904f4f       kube-apiserver-ha-362969
	5a872e8f0c2a4       4eadde00b6c50b581474eaa28b09bfcdd974ccaab8bafac22b08e7d2ecd66fc1   About a minute ago   Exited              kube-vip                  2                   a4bbee75b8465       kube-vip-ha-362969
	5bee601fa21af       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d   About a minute ago   Running             kube-scheduler            2                   57ab9ee45e519       kube-scheduler-ha-362969
	86b92247effe5       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   About a minute ago   Running             etcd                      2                   073a6148f6eeb       etcd-ha-362969
	
	
	==> coredns [0c7f727411717b5e97f30729c4885f5a4538b65e71996f3d9d44875d6b761229] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57879 - 4766 "HINFO IN 4549309376048101916.8294362496394010281. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.043625832s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2099377214]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 13:36:31.010) (total time: 30001ms):
	Trace[2099377214]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (13:37:01.010)
	Trace[2099377214]: [30.001024394s] [30.001024394s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2084114193]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 13:36:31.011) (total time: 30000ms):
	Trace[2084114193]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (13:37:01.011)
	Trace[2084114193]: [30.000558272s] [30.000558272s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[737109425]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 13:36:31.010) (total time: 30000ms):
	Trace[737109425]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (13:37:01.011)
	Trace[737109425]: [30.000823438s] [30.000823438s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [539bc8f0ab19c268f2117ab389767673ea5e8ade5d07bb916fe5fa8aa05dc969] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52608 - 38380 "HINFO IN 8829685171436480349.4100183794854369285. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021972017s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2124381574]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 13:36:30.850) (total time: 30003ms):
	Trace[2124381574]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (13:37:00.853)
	Trace[2124381574]: [30.003281215s] [30.003281215s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1931975225]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 13:36:30.849) (total time: 30004ms):
	Trace[1931975225]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (13:37:00.853)
	Trace[1931975225]: [30.004061131s] [30.004061131s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[102627925]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 13:36:30.849) (total time: 30004ms):
	Trace[102627925]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (13:37:00.853)
	Trace[102627925]: [30.004637264s] [30.004637264s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-362969
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-362969
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-362969
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T13_26_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 13:26:18 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-362969
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 13:37:11 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Oct 2024 13:36:30 +0000   Mon, 07 Oct 2024 13:37:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Oct 2024 13:36:30 +0000   Mon, 07 Oct 2024 13:37:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Oct 2024 13:36:30 +0000   Mon, 07 Oct 2024 13:37:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Oct 2024 13:36:30 +0000   Mon, 07 Oct 2024 13:37:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-362969
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c4396603c0e4dfb8b9d2408da3d77f3
	  System UUID:                e1f1cf65-1fa2-4193-99cf-1ef72f2e4546
	  Boot ID:                    aa802e8e-7a27-4e80-bbf6-ed0c45666ec2
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-c7s47              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 coredns-7c65d6cfc9-kjxj5             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 coredns-7c65d6cfc9-v7rpb             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-ha-362969                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-2pfgm                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-362969             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-362969    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-vxzkt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-362969             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-362969                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 90s                    kube-proxy       
	  Normal   Starting                 5m26s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  11m                    kubelet          Node ha-362969 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 11m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    11m                    kubelet          Node ha-362969 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                    kubelet          Node ha-362969 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                    node-controller  Node ha-362969 event: Registered Node ha-362969 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-362969 event: Registered Node ha-362969 in Controller
	  Normal   NodeReady                10m                    kubelet          Node ha-362969 status is now: NodeReady
	  Normal   RegisteredNode           9m58s                  node-controller  Node ha-362969 event: Registered Node ha-362969 in Controller
	  Normal   RegisteredNode           6m56s                  node-controller  Node ha-362969 event: Registered Node ha-362969 in Controller
	  Normal   NodeHasSufficientPID     6m18s (x7 over 6m18s)  kubelet          Node ha-362969 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    6m18s (x8 over 6m18s)  kubelet          Node ha-362969 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  6m18s (x8 over 6m18s)  kubelet          Node ha-362969 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 6m18s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 6m18s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m37s                  node-controller  Node ha-362969 event: Registered Node ha-362969 in Controller
	  Normal   RegisteredNode           4m38s                  node-controller  Node ha-362969 event: Registered Node ha-362969 in Controller
	  Normal   RegisteredNode           3m39s                  node-controller  Node ha-362969 event: Registered Node ha-362969 in Controller
	  Normal   Starting                 119s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 119s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  119s (x8 over 119s)    kubelet          Node ha-362969 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    119s (x8 over 119s)    kubelet          Node ha-362969 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     119s (x7 over 119s)    kubelet          Node ha-362969 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s                    node-controller  Node ha-362969 event: Registered Node ha-362969 in Controller
	  Normal   RegisteredNode           31s                    node-controller  Node ha-362969 event: Registered Node ha-362969 in Controller
	  Normal   NodeNotReady             5s                     node-controller  Node ha-362969 status is now: NodeNotReady
	
	
	Name:               ha-362969-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-362969-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-362969
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T13_26_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 13:26:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-362969-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 13:38:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 13:36:24 +0000   Mon, 07 Oct 2024 13:26:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 13:36:24 +0000   Mon, 07 Oct 2024 13:26:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 13:36:24 +0000   Mon, 07 Oct 2024 13:26:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 13:36:24 +0000   Mon, 07 Oct 2024 13:27:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-362969-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 23e3f538cde24c208403fa3578523bea
	  System UUID:                3d5eeb88-a7a9-416e-b73e-00b9ff026546
	  Boot ID:                    aa802e8e-7a27-4e80-bbf6-ed0c45666ec2
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wwxsq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 etcd-ha-362969-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-xc9st                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-362969-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-362969-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-qxlrd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-362969-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-362969-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 6m59s                  kube-proxy       
	  Normal   Starting                 4m45s                  kube-proxy       
	  Normal   Starting                 67s                    kube-proxy       
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node ha-362969-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-362969-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-362969-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           11m                    node-controller  Node ha-362969-m02 event: Registered Node ha-362969-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-362969-m02 event: Registered Node ha-362969-m02 in Controller
	  Normal   RegisteredNode           9m58s                  node-controller  Node ha-362969-m02 event: Registered Node ha-362969-m02 in Controller
	  Normal   NodeHasSufficientPID     7m28s (x7 over 7m28s)  kubelet          Node ha-362969-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m28s (x8 over 7m28s)  kubelet          Node ha-362969-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 7m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m28s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m28s (x8 over 7m28s)  kubelet          Node ha-362969-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           6m56s                  node-controller  Node ha-362969-m02 event: Registered Node ha-362969-m02 in Controller
	  Normal   NodeHasSufficientMemory  6m16s (x8 over 6m16s)  kubelet          Node ha-362969-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     6m16s (x7 over 6m16s)  kubelet          Node ha-362969-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    6m16s (x8 over 6m16s)  kubelet          Node ha-362969-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m16s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m16s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           5m37s                  node-controller  Node ha-362969-m02 event: Registered Node ha-362969-m02 in Controller
	  Normal   RegisteredNode           4m38s                  node-controller  Node ha-362969-m02 event: Registered Node ha-362969-m02 in Controller
	  Normal   RegisteredNode           3m39s                  node-controller  Node ha-362969-m02 event: Registered Node ha-362969-m02 in Controller
	  Normal   Starting                 117s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 117s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  117s (x8 over 117s)    kubelet          Node ha-362969-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    117s (x8 over 117s)    kubelet          Node ha-362969-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     117s (x7 over 117s)    kubelet          Node ha-362969-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s                    node-controller  Node ha-362969-m02 event: Registered Node ha-362969-m02 in Controller
	  Normal   RegisteredNode           31s                    node-controller  Node ha-362969-m02 event: Registered Node ha-362969-m02 in Controller
	
	
	Name:               ha-362969-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-362969-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=ha-362969
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T13_29_08_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 13:29:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-362969-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 13:37:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 13:37:42 +0000   Mon, 07 Oct 2024 13:37:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 13:37:42 +0000   Mon, 07 Oct 2024 13:37:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 13:37:42 +0000   Mon, 07 Oct 2024 13:37:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 13:37:42 +0000   Mon, 07 Oct 2024 13:37:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-362969-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 3344af8aa3194968b246d09dc1de3970
	  System UUID:                fea1ea21-37af-43f6-a603-406b33ffc017
	  Boot ID:                    aa802e8e-7a27-4e80-bbf6-ed0c45666ec2
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-js2s6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 kindnet-4rw9w              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m54s
	  kube-system                 kube-proxy-jwdpx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m49s                  kube-proxy       
	  Normal   Starting                 9s                     kube-proxy       
	  Normal   Starting                 2m57s                  kube-proxy       
	  Normal   NodeHasSufficientPID     8m54s (x2 over 8m54s)  kubelet          Node ha-362969-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    8m54s (x2 over 8m54s)  kubelet          Node ha-362969-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  8m54s (x2 over 8m54s)  kubelet          Node ha-362969-m04 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 8m54s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           8m53s                  node-controller  Node ha-362969-m04 event: Registered Node ha-362969-m04 in Controller
	  Normal   RegisteredNode           8m53s                  node-controller  Node ha-362969-m04 event: Registered Node ha-362969-m04 in Controller
	  Normal   CIDRAssignmentFailed     8m53s                  cidrAllocator    Node ha-362969-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           8m52s                  node-controller  Node ha-362969-m04 event: Registered Node ha-362969-m04 in Controller
	  Normal   NodeReady                8m11s                  kubelet          Node ha-362969-m04 status is now: NodeReady
	  Normal   RegisteredNode           6m56s                  node-controller  Node ha-362969-m04 event: Registered Node ha-362969-m04 in Controller
	  Normal   RegisteredNode           5m37s                  node-controller  Node ha-362969-m04 event: Registered Node ha-362969-m04 in Controller
	  Normal   NodeNotReady             4m57s                  node-controller  Node ha-362969-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           4m38s                  node-controller  Node ha-362969-m04 event: Registered Node ha-362969-m04 in Controller
	  Normal   RegisteredNode           3m39s                  node-controller  Node ha-362969-m04 event: Registered Node ha-362969-m04 in Controller
	  Warning  CgroupV1                 3m28s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 3m28s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     3m21s (x7 over 3m28s)  kubelet          Node ha-362969-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  3m15s (x8 over 3m28s)  kubelet          Node ha-362969-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m15s (x8 over 3m28s)  kubelet          Node ha-362969-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           80s                    node-controller  Node ha-362969-m04 event: Registered Node ha-362969-m04 in Controller
	  Normal   NodeNotReady             40s                    node-controller  Node ha-362969-m04 status is now: NodeNotReady
	  Normal   Starting                 32s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 32s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           31s                    node-controller  Node ha-362969-m04 event: Registered Node ha-362969-m04 in Controller
	  Normal   NodeHasSufficientPID     26s (x7 over 32s)      kubelet          Node ha-362969-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  19s (x8 over 32s)      kubelet          Node ha-362969-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 32s)      kubelet          Node ha-362969-m04 status is now: NodeHasNoDiskPressure
	
	
	==> dmesg <==
	
	
	==> etcd [86b92247effe5614533225735ffe144622cb22c57be7f0a88ecc40cf924f95ac] <==
	{"level":"info","ts":"2024-10-07T13:36:22.953453Z","caller":"traceutil/trace.go:171","msg":"trace[672753884] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; }","duration":"7.556847237s","start":"2024-10-07T13:36:15.396602Z","end":"2024-10-07T13:36:22.953449Z","steps":["trace[672753884] 'agreement among raft nodes before linearized reading'  (duration: 7.511176427s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:36:22.953474Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T13:36:15.396588Z","time spent":"7.556880565s","remote":"127.0.0.1:57450","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":0,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:500 "}
	{"level":"info","ts":"2024-10-07T13:36:22.953497Z","caller":"traceutil/trace.go:171","msg":"trace[420702817] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; }","duration":"7.556912991s","start":"2024-10-07T13:36:15.396580Z","end":"2024-10-07T13:36:22.953493Z","steps":["trace[420702817] 'agreement among raft nodes before linearized reading'  (duration: 7.5112106s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:36:22.953516Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T13:36:15.396557Z","time spent":"7.556954163s","remote":"127.0.0.1:57284","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" limit:500 "}
	{"level":"info","ts":"2024-10-07T13:36:22.953534Z","caller":"traceutil/trace.go:171","msg":"trace[6062905] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; }","duration":"7.556982889s","start":"2024-10-07T13:36:15.396548Z","end":"2024-10-07T13:36:22.953531Z","steps":["trace[6062905] 'agreement among raft nodes before linearized reading'  (duration: 7.511253331s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:36:22.953560Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T13:36:15.396544Z","time spent":"7.557003943s","remote":"127.0.0.1:57478","response type":"/etcdserverpb.KV/Range","request count":0,"request size":95,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" limit:500 "}
	{"level":"info","ts":"2024-10-07T13:36:22.953585Z","caller":"traceutil/trace.go:171","msg":"trace[1636329926] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; }","duration":"7.55704614s","start":"2024-10-07T13:36:15.396535Z","end":"2024-10-07T13:36:22.953581Z","steps":["trace[1636329926] 'agreement among raft nodes before linearized reading'  (duration: 7.511279767s)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:36:22.953719Z","caller":"traceutil/trace.go:171","msg":"trace[1580954038] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-fhvuoe2ohavimijbvu42ug6fn4; range_end:; }","duration":"7.557201271s","start":"2024-10-07T13:36:15.396510Z","end":"2024-10-07T13:36:22.953711Z","steps":["trace[1580954038] 'agreement among raft nodes before linearized reading'  (duration: 7.511315598s)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:36:22.953765Z","caller":"traceutil/trace.go:171","msg":"trace[1117440018] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; }","duration":"7.565270248s","start":"2024-10-07T13:36:15.388489Z","end":"2024-10-07T13:36:22.953760Z","steps":["trace[1117440018] 'agreement among raft nodes before linearized reading'  (duration: 7.519346833s)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:36:22.953798Z","caller":"traceutil/trace.go:171","msg":"trace[1981089825] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; }","duration":"7.581997276s","start":"2024-10-07T13:36:15.371798Z","end":"2024-10-07T13:36:22.953795Z","steps":["trace[1981089825] 'agreement among raft nodes before linearized reading'  (duration: 7.536050591s)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:36:22.953827Z","caller":"traceutil/trace.go:171","msg":"trace[1151320539] range","detail":"{range_begin:/registry/minions/ha-362969-m02; range_end:; }","duration":"7.582118249s","start":"2024-10-07T13:36:15.371706Z","end":"2024-10-07T13:36:22.953824Z","steps":["trace[1151320539] 'agreement among raft nodes before linearized reading'  (duration: 7.536153563s)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:36:22.953854Z","caller":"traceutil/trace.go:171","msg":"trace[1412879045] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; }","duration":"7.585765445s","start":"2024-10-07T13:36:15.368085Z","end":"2024-10-07T13:36:22.953850Z","steps":["trace[1412879045] 'agreement among raft nodes before linearized reading'  (duration: 7.539788812s)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:36:22.953881Z","caller":"traceutil/trace.go:171","msg":"trace[29215400] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; }","duration":"8.42601973s","start":"2024-10-07T13:36:14.527858Z","end":"2024-10-07T13:36:22.953877Z","steps":["trace[29215400] 'agreement among raft nodes before linearized reading'  (duration: 8.380026754s)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:36:22.953909Z","caller":"traceutil/trace.go:171","msg":"trace[532038802] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; }","duration":"8.426076213s","start":"2024-10-07T13:36:14.527825Z","end":"2024-10-07T13:36:22.953901Z","steps":["trace[532038802] 'agreement among raft nodes before linearized reading'  (duration: 8.380073046s)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:36:22.953938Z","caller":"traceutil/trace.go:171","msg":"trace[1276149544] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; }","duration":"8.619043745s","start":"2024-10-07T13:36:14.334890Z","end":"2024-10-07T13:36:22.953934Z","steps":["trace[1276149544] 'agreement among raft nodes before linearized reading'  (duration: 8.573019959s)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:36:22.953967Z","caller":"traceutil/trace.go:171","msg":"trace[583112903] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; }","duration":"8.653322575s","start":"2024-10-07T13:36:14.300640Z","end":"2024-10-07T13:36:22.953963Z","steps":["trace[583112903] 'agreement among raft nodes before linearized reading'  (duration: 8.607281756s)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:36:22.953999Z","caller":"traceutil/trace.go:171","msg":"trace[117261796] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; }","duration":"8.653385327s","start":"2024-10-07T13:36:14.300610Z","end":"2024-10-07T13:36:22.953995Z","steps":["trace[117261796] 'agreement among raft nodes before linearized reading'  (duration: 8.607324717s)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:36:22.954025Z","caller":"traceutil/trace.go:171","msg":"trace[171203853] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; }","duration":"8.722010046s","start":"2024-10-07T13:36:14.232011Z","end":"2024-10-07T13:36:22.954021Z","steps":["trace[171203853] 'agreement among raft nodes before linearized reading'  (duration: 8.675934733s)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:36:22.954053Z","caller":"traceutil/trace.go:171","msg":"trace[1702089944] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; }","duration":"8.722074151s","start":"2024-10-07T13:36:14.231975Z","end":"2024-10-07T13:36:22.954050Z","steps":["trace[1702089944] 'agreement among raft nodes before linearized reading'  (duration: 8.675985275s)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:36:22.954081Z","caller":"traceutil/trace.go:171","msg":"trace[1100278654] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; }","duration":"7.557407141s","start":"2024-10-07T13:36:15.396670Z","end":"2024-10-07T13:36:22.954078Z","steps":["trace[1100278654] 'agreement among raft nodes before linearized reading'  (duration: 7.51107105s)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:36:22.954143Z","caller":"traceutil/trace.go:171","msg":"trace[1893583629] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; }","duration":"7.55749819s","start":"2024-10-07T13:36:15.396642Z","end":"2024-10-07T13:36:22.954140Z","steps":["trace[1893583629] 'agreement among raft nodes before linearized reading'  (duration: 7.511112698s)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:36:22.954230Z","caller":"traceutil/trace.go:171","msg":"trace[202000637] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; }","duration":"7.557605815s","start":"2024-10-07T13:36:15.396619Z","end":"2024-10-07T13:36:22.954225Z","steps":["trace[202000637] 'agreement among raft nodes before linearized reading'  (duration: 7.511146536s)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T13:36:22.954312Z","caller":"traceutil/trace.go:171","msg":"trace[1858983402] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; }","duration":"9.726877794s","start":"2024-10-07T13:36:13.227430Z","end":"2024-10-07T13:36:22.954308Z","steps":["trace[1858983402] 'agreement among raft nodes before linearized reading'  (duration: 9.677972282s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T13:36:22.964832Z","caller":"etcdserver/v3_server.go:897","msg":"ignored out-of-date read index response; local node read indexes queueing up and waiting to be in sync with leader","sent-request-id":8128032399411890444,"received-request-id":8128032399411890443}
	{"level":"info","ts":"2024-10-07T13:36:30.813746Z","caller":"traceutil/trace.go:171","msg":"trace[949832327] transaction","detail":"{read_only:false; response_revision:2968; number_of_response:1; }","duration":"105.728661ms","start":"2024-10-07T13:36:30.708001Z","end":"2024-10-07T13:36:30.813730Z","steps":["trace[949832327] 'process raft request'  (duration: 103.097423ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:38:02 up 1 day,  3:20,  0 users,  load average: 1.68, 2.02, 1.79
	Linux ha-362969 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [2ae65e236f395e935a0cf2f762850849051b44e91bb5ae525a7f9e9ff67af6d1] <==
	I1007 13:37:21.117878       1 main.go:322] Node ha-362969-m04 has CIDR [10.244.3.0/24] 
	I1007 13:37:31.118015       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:37:31.118047       1 main.go:299] handling current node
	I1007 13:37:31.118063       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I1007 13:37:31.118070       1 main.go:322] Node ha-362969-m02 has CIDR [10.244.1.0/24] 
	I1007 13:37:31.118265       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I1007 13:37:31.118294       1 main.go:322] Node ha-362969-m04 has CIDR [10.244.3.0/24] 
	I1007 13:37:41.123936       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:37:41.124064       1 main.go:299] handling current node
	I1007 13:37:41.124088       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I1007 13:37:41.124120       1 main.go:322] Node ha-362969-m02 has CIDR [10.244.1.0/24] 
	I1007 13:37:41.124230       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I1007 13:37:41.124274       1 main.go:322] Node ha-362969-m04 has CIDR [10.244.3.0/24] 
	I1007 13:37:51.121804       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:37:51.121935       1 main.go:299] handling current node
	I1007 13:37:51.121961       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I1007 13:37:51.121971       1 main.go:322] Node ha-362969-m02 has CIDR [10.244.1.0/24] 
	I1007 13:37:51.122088       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I1007 13:37:51.122102       1 main.go:322] Node ha-362969-m04 has CIDR [10.244.3.0/24] 
	I1007 13:38:01.118581       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:38:01.118669       1 main.go:299] handling current node
	I1007 13:38:01.118696       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I1007 13:38:01.118703       1 main.go:322] Node ha-362969-m02 has CIDR [10.244.1.0/24] 
	I1007 13:38:01.118855       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I1007 13:38:01.118869       1 main.go:322] Node ha-362969-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0cd11b4372487d985f9e11b958c54b619be021339d7a124086c04d6998538bdd] <==
	E1007 13:36:22.962013       1 cacher.go:478] cacher (limitranges): unexpected ListAndWatch error: failed to list *core.LimitRange: etcdserver: leader changed; reinitializing...
	E1007 13:36:22.962632       1 controller.go:145] "Failed to ensure lease exists, will retry" err="etcdserver: leader changed" interval="200ms"
	W1007 13:36:22.962706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PriorityClass: etcdserver: leader changed
	E1007 13:36:22.962728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PriorityClass: failed to list *v1.PriorityClass: etcdserver: leader changed" logger="UnhandledError"
	W1007 13:36:22.962787       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.FlowSchema: etcdserver: leader changed
	E1007 13:36:22.962804       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.FlowSchema: failed to list *v1.FlowSchema: etcdserver: leader changed" logger="UnhandledError"
	I1007 13:36:23.333358       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1007 13:36:23.943304       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1007 13:36:23.980302       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1007 13:36:23.980408       1 policy_source.go:224] refreshing policies
	W1007 13:36:24.053111       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1007 13:36:24.056971       1 controller.go:615] quota admission added evaluator for: endpoints
	I1007 13:36:24.067437       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1007 13:36:24.070642       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1007 13:36:24.075151       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1007 13:36:24.332912       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1007 13:36:24.333020       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1007 13:36:24.355753       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1007 13:36:24.391386       1 cache.go:39] Caches are synced for autoregister controller
	I1007 13:36:24.432557       1 shared_informer.go:320] Caches are synced for configmaps
	I1007 13:36:24.432594       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1007 13:36:24.438846       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1007 13:36:24.443844       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1007 13:36:24.444222       1 cache.go:39] Caches are synced for LocalAvailability controller
	F1007 13:37:15.343752       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-apiserver [fd373a9692e794a434bb49be84ace9bdd3d9c753aa171c7424d4c61843aa4ac9] <==
	I1007 13:37:19.664200       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I1007 13:37:19.664229       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1007 13:37:19.664245       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1007 13:37:19.698100       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1007 13:37:19.714114       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1007 13:37:19.714246       1 policy_source.go:224] refreshing policies
	I1007 13:37:19.728107       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1007 13:37:19.728261       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1007 13:37:19.733961       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1007 13:37:19.735805       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1007 13:37:19.735882       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1007 13:37:19.735993       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1007 13:37:19.736042       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1007 13:37:19.741458       1 aggregator.go:171] initial CRD sync complete...
	I1007 13:37:19.741790       1 autoregister_controller.go:144] Starting autoregister controller
	I1007 13:37:19.741823       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1007 13:37:19.741840       1 cache.go:39] Caches are synced for autoregister controller
	I1007 13:37:19.741994       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1007 13:37:19.744089       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1007 13:37:19.744114       1 shared_informer.go:320] Caches are synced for configmaps
	http2: server: error reading preface from client 127.0.0.1:56570: read tcp 127.0.0.1:8443->127.0.0.1:56570: read: connection reset by peer
	I1007 13:37:20.330787       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1007 13:37:20.975171       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1007 13:37:20.976738       1 controller.go:615] quota admission added evaluator for: endpoints
	I1007 13:37:20.985247       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0e6adae3ac6cefac7ff97a0138c4eb31908cc6af8f0e3c0ea48d69987d26bd09] <==
	I1007 13:37:31.069340       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="70.964µs"
	I1007 13:37:31.069449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="108.124µs"
	I1007 13:37:31.469497       1 shared_informer.go:320] Caches are synced for garbage collector
	I1007 13:37:31.520434       1 shared_informer.go:320] Caches are synced for garbage collector
	I1007 13:37:31.520462       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1007 13:37:42.381107       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-362969-m04"
	I1007 13:37:42.381842       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-362969-m04"
	I1007 13:37:42.398390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-362969-m04"
	I1007 13:37:45.965287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-362969-m04"
	I1007 13:37:51.508511       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.216µs"
	I1007 13:37:52.753335       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="85.057048ms"
	I1007 13:37:52.753440       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.755µs"
	I1007 13:37:56.688408       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-362969-m04"
	I1007 13:37:56.688426       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-362969"
	I1007 13:37:56.704963       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-362969"
	I1007 13:37:56.743191       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="20.084906ms"
	I1007 13:37:56.758865       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.96µs"
	I1007 13:37:56.791310       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-5c9cw EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-5c9cw\": the object has been modified; please apply your changes to the latest version and try again"
	I1007 13:37:56.791600       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"2b565db4-5a35-48c4-94ef-66c72d835cf0", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-5c9cw EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-5c9cw": the object has been modified; please apply your changes to the latest version and try again
	I1007 13:37:56.870555       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.916356ms"
	I1007 13:37:56.870753       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.466µs"
	I1007 13:37:57.009662       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="20.821189ms"
	I1007 13:37:57.010510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="122.712µs"
	I1007 13:38:01.047719       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-362969"
	I1007 13:38:02.107954       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-362969"
	
	
	==> kube-controller-manager [37c605aef5be2cfc31e3de00d76a4dd1c0bc93bbe009748d3ef6d76cb537aff3] <==
	I1007 13:36:55.828886       1 serving.go:386] Generated self-signed cert in-memory
	I1007 13:36:56.730461       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1007 13:36:56.730560       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 13:36:56.732191       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1007 13:36:56.732303       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1007 13:36:56.732611       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1007 13:36:56.732688       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1007 13:37:06.750173       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [eca1129019b9efa5c43a52a4e2cb64f9a710a2a1d7f2b8f248161d7f58b76835] <==
	I1007 13:36:31.268530       1 server_linux.go:66] "Using iptables proxy"
	I1007 13:36:31.383306       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1007 13:36:31.383375       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 13:36:31.428504       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1007 13:36:31.428633       1 server_linux.go:169] "Using iptables Proxier"
	I1007 13:36:31.432330       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 13:36:31.432769       1 server.go:483] "Version info" version="v1.31.1"
	I1007 13:36:31.432826       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 13:36:31.442601       1 config.go:199] "Starting service config controller"
	I1007 13:36:31.442725       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 13:36:31.442781       1 config.go:105] "Starting endpoint slice config controller"
	I1007 13:36:31.442811       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 13:36:31.443378       1 config.go:328] "Starting node config controller"
	I1007 13:36:31.449930       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 13:36:31.543395       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 13:36:31.543452       1 shared_informer.go:320] Caches are synced for service config
	I1007 13:36:31.550484       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5bee601fa21aff2654e944157a9b8fc1452fa86c6e3568e33fb8c81018eb90b3] <==
	W1007 13:36:23.612371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 13:36:23.612429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 13:36:23.759899       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 13:36:23.759944       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 13:36:23.790506       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 13:36:23.790648       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1007 13:36:24.095711       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 13:36:24.095772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 13:36:24.296218       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1007 13:36:24.296254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError"
	I1007 13:36:34.827200       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1007 13:37:19.693519       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59336->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 13:37:19.703046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:59330->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 13:37:19.703194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:59404->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 13:37:19.703274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:59376->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 13:37:19.704539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:59372->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 13:37:19.711678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:59362->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 13:37:19.711849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:59350->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 13:37:19.711942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:59292->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 13:37:19.712036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:59284->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 13:37:19.712121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:59276->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 13:37:19.712224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59314->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 13:37:19.712368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59344->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 13:37:19.714185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:59392->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E1007 13:37:19.719775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:59300->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Oct 07 13:37:12 ha-362969 kubelet[759]: I1007 13:37:12.754485     759 scope.go:117] "RemoveContainer" containerID="37c605aef5be2cfc31e3de00d76a4dd1c0bc93bbe009748d3ef6d76cb537aff3"
	Oct 07 13:37:12 ha-362969 kubelet[759]: E1007 13:37:12.754666     759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-362969_kube-system(a9168899bccb1402519e7acfb110ebfc)\"" pod="kube-system/kube-controller-manager-ha-362969" podUID="a9168899bccb1402519e7acfb110ebfc"
	Oct 07 13:37:12 ha-362969 kubelet[759]: E1007 13:37:12.881229     759 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308232881044380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:37:12 ha-362969 kubelet[759]: E1007 13:37:12.881267     759 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308232881044380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:37:16 ha-362969 kubelet[759]: I1007 13:37:16.064854     759 scope.go:117] "RemoveContainer" containerID="0cd11b4372487d985f9e11b958c54b619be021339d7a124086c04d6998538bdd"
	Oct 07 13:37:16 ha-362969 kubelet[759]: I1007 13:37:16.065464     759 status_manager.go:851] "Failed to get status for pod" podUID="ec9d9e1199c58964339c9f23fb32d57e" pod="kube-system/kube-apiserver-ha-362969" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-362969\": dial tcp 192.168.49.254:8443: connect: connection refused"
	Oct 07 13:37:16 ha-362969 kubelet[759]: E1007 13:37:16.066905     759 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-362969.17fc2f1f82441544\": dial tcp 192.168.49.254:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-ha-362969.17fc2f1f82441544  kube-system   2929 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-362969,UID:ec9d9e1199c58964339c9f23fb32d57e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.31.1\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-362969,},FirstTimestamp:2024-10-07 13:36:09 +0000 UTC,LastTimestamp:2024-10-07 13:37:16.065942446 +0000 UTC m=+73.423800073,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-362969,}"
	Oct 07 13:37:19 ha-362969 kubelet[759]: E1007 13:37:19.490277     759 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:45678->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Oct 07 13:37:19 ha-362969 kubelet[759]: E1007 13:37:19.491000     759 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:45686->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Oct 07 13:37:19 ha-362969 kubelet[759]: E1007 13:37:19.491186     759 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:45620->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Oct 07 13:37:19 ha-362969 kubelet[759]: E1007 13:37:19.491584     759 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:45718->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Oct 07 13:37:20 ha-362969 kubelet[759]: I1007 13:37:20.075444     759 scope.go:117] "RemoveContainer" containerID="5a872e8f0c2a48e56561aa2b475452e8aa366a91d8f8a8b9aaaa8f6835e3e433"
	Oct 07 13:37:22 ha-362969 kubelet[759]: E1007 13:37:22.882248     759 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308242882082161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:37:22 ha-362969 kubelet[759]: E1007 13:37:22.882278     759 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308242882082161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:37:26 ha-362969 kubelet[759]: I1007 13:37:26.808374     759 scope.go:117] "RemoveContainer" containerID="37c605aef5be2cfc31e3de00d76a4dd1c0bc93bbe009748d3ef6d76cb537aff3"
	Oct 07 13:37:32 ha-362969 kubelet[759]: E1007 13:37:32.369766     759 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-362969?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Oct 07 13:37:32 ha-362969 kubelet[759]: E1007 13:37:32.884590     759 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308252884064558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:37:32 ha-362969 kubelet[759]: E1007 13:37:32.884625     759 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308252884064558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:37:42 ha-362969 kubelet[759]: E1007 13:37:42.370648     759 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-362969?timeout=10s\": context deadline exceeded"
	Oct 07 13:37:42 ha-362969 kubelet[759]: E1007 13:37:42.889665     759 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308262889423663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:37:42 ha-362969 kubelet[759]: E1007 13:37:42.889703     759 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308262889423663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:37:52 ha-362969 kubelet[759]: E1007 13:37:52.371956     759 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-362969?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Oct 07 13:37:52 ha-362969 kubelet[759]: E1007 13:37:52.890983     759 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308272890780717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:37:52 ha-362969 kubelet[759]: E1007 13:37:52.891020     759 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728308272890780717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 13:38:02 ha-362969 kubelet[759]: E1007 13:38:02.372598     759 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-362969?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-362969 -n ha-362969
helpers_test.go:261: (dbg) Run:  kubectl --context ha-362969 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (128.66s)
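The kube-controller-manager log above shows the RestartCluster failure mode: the manager gave up building its controller context because the apiserver's /healthz kept reporting [-]poststarthook/start-service-ip-repair-controllers failed until the wait timed out. For local triage it can help to watch when /healthz actually turns healthy while the cluster restarts. The Go probe below is a minimal, hypothetical helper for that; the https://192.168.49.2:8443 endpoint and the InsecureSkipVerify setting are assumptions for this minikube cluster and are not part of the test harness.

// healthzprobe polls a kube-apiserver /healthz endpoint until it reports "ok"
// or the polling window expires. Hypothetical local-triage helper only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const endpoint = "https://192.168.49.2:8443/healthz?verbose" // assumed control-plane address
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver certificate is signed by minikube's own CA; skipping
		// verification keeps this sketch self-contained. Local debugging only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(endpoint)
		if err != nil {
			fmt.Printf("%s not reachable: %v\n", time.Now().Format(time.RFC3339), err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s status=%d\n%s\n", time.Now().Format(time.RFC3339), resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("healthz never reported ok within the polling window")
}

With ?verbose the response lists the individual poststart hooks, so the output shows directly which check (here start-service-ip-repair-controllers) is still failing while the controller-manager waits.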

                                                
                                    

Test pass (295/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.14
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 5.68
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.54
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 194.48
31 TestAddons/serial/GCPAuth/Namespaces 0.19
34 TestAddons/parallel/Registry 16.98
36 TestAddons/parallel/InspektorGadget 11.76
39 TestAddons/parallel/CSI 64.07
40 TestAddons/parallel/Headlamp 17.09
41 TestAddons/parallel/CloudSpanner 6.65
42 TestAddons/parallel/LocalPath 8.55
43 TestAddons/parallel/NvidiaDevicePlugin 6.62
44 TestAddons/parallel/Yakd 11.82
45 TestAddons/StoppedEnableDisable 12.26
46 TestCertOptions 38.8
47 TestCertExpiration 330.8
49 TestForceSystemdFlag 35.63
50 TestForceSystemdEnv 40.91
56 TestErrorSpam/setup 30.61
57 TestErrorSpam/start 0.78
58 TestErrorSpam/status 1.07
59 TestErrorSpam/pause 1.94
60 TestErrorSpam/unpause 1.81
61 TestErrorSpam/stop 1.49
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 45.88
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 27.51
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.11
73 TestFunctional/serial/CacheCmd/cache/add_local 1.45
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.27
78 TestFunctional/serial/CacheCmd/cache/delete 0.13
79 TestFunctional/serial/MinikubeKubectlCmd 0.16
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
81 TestFunctional/serial/ExtraConfig 40
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.73
84 TestFunctional/serial/LogsFileCmd 1.79
85 TestFunctional/serial/InvalidService 4.5
87 TestFunctional/parallel/ConfigCmd 0.56
88 TestFunctional/parallel/DashboardCmd 10.81
89 TestFunctional/parallel/DryRun 0.44
90 TestFunctional/parallel/InternationalLanguage 0.2
91 TestFunctional/parallel/StatusCmd 1.03
95 TestFunctional/parallel/ServiceCmdConnect 12.76
96 TestFunctional/parallel/AddonsCmd 0.32
97 TestFunctional/parallel/PersistentVolumeClaim 26.46
99 TestFunctional/parallel/SSHCmd 0.66
100 TestFunctional/parallel/CpCmd 2.19
102 TestFunctional/parallel/FileSync 0.29
103 TestFunctional/parallel/CertSync 2.19
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
111 TestFunctional/parallel/License 0.22
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.43
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.22
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
125 TestFunctional/parallel/ProfileCmd/profile_list 0.42
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
127 TestFunctional/parallel/ServiceCmd/List 0.68
128 TestFunctional/parallel/MountCmd/any-port 9.53
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
131 TestFunctional/parallel/ServiceCmd/Format 0.47
132 TestFunctional/parallel/ServiceCmd/URL 0.46
133 TestFunctional/parallel/MountCmd/specific-port 2.39
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.05
135 TestFunctional/parallel/Version/short 0.08
136 TestFunctional/parallel/Version/components 1.2
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.36
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.84
142 TestFunctional/parallel/ImageCommands/Setup 0.83
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.67
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.09
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.33
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.54
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.83
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 173.5
160 TestMultiControlPlane/serial/DeployApp 8.69
161 TestMultiControlPlane/serial/PingHostFromPods 1.69
162 TestMultiControlPlane/serial/AddWorkerNode 64.88
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.02
165 TestMultiControlPlane/serial/CopyFile 19.26
166 TestMultiControlPlane/serial/StopSecondaryNode 12.79
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
168 TestMultiControlPlane/serial/RestartSecondaryNode 25.19
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.35
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 247.42
171 TestMultiControlPlane/serial/DeleteSecondaryNode 12.66
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
173 TestMultiControlPlane/serial/StopCluster 35.85
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
176 TestMultiControlPlane/serial/AddSecondaryNode 71.61
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.96
181 TestJSONOutput/start/Command 74.59
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.74
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.76
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.95
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.24
206 TestKicCustomNetwork/create_custom_network 39.32
207 TestKicCustomNetwork/use_default_bridge_network 36.46
208 TestKicExistingNetwork 34.23
209 TestKicCustomSubnet 34.02
210 TestKicStaticIP 30.89
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 64.2
215 TestMountStart/serial/StartWithMountFirst 7.16
216 TestMountStart/serial/VerifyMountFirst 0.27
217 TestMountStart/serial/StartWithMountSecond 6.54
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.64
220 TestMountStart/serial/VerifyMountPostDelete 0.26
221 TestMountStart/serial/Stop 1.22
222 TestMountStart/serial/RestartStopped 7.73
223 TestMountStart/serial/VerifyMountPostStop 0.27
226 TestMultiNode/serial/FreshStart2Nodes 112.91
227 TestMultiNode/serial/DeployApp2Nodes 8.15
228 TestMultiNode/serial/PingHostFrom2Pods 1.04
229 TestMultiNode/serial/AddNode 58.87
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.68
232 TestMultiNode/serial/CopyFile 10.08
233 TestMultiNode/serial/StopNode 2.26
234 TestMultiNode/serial/StartAfterStop 9.78
235 TestMultiNode/serial/RestartKeepsNodes 80.94
236 TestMultiNode/serial/DeleteNode 5.32
237 TestMultiNode/serial/StopMultiNode 23.88
238 TestMultiNode/serial/RestartMultiNode 49.5
239 TestMultiNode/serial/ValidateNameConflict 39.9
244 TestPreload 128.91
246 TestScheduledStopUnix 109.46
249 TestInsufficientStorage 10.38
250 TestRunningBinaryUpgrade 66.26
252 TestKubernetesUpgrade 386.18
253 TestMissingContainerUpgrade 153.48
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/StartWithK8s 39.85
257 TestNoKubernetes/serial/StartWithStopK8s 10.25
258 TestNoKubernetes/serial/Start 9.2
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
260 TestNoKubernetes/serial/ProfileList 1.25
261 TestNoKubernetes/serial/Stop 1.29
262 TestNoKubernetes/serial/StartNoArgs 7.88
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
264 TestStoppedBinaryUpgrade/Setup 0.92
265 TestStoppedBinaryUpgrade/Upgrade 76.2
266 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
275 TestPause/serial/Start 49.99
276 TestPause/serial/SecondStartNoReconfiguration 22.48
277 TestPause/serial/Pause 0.87
278 TestPause/serial/VerifyStatus 0.43
279 TestPause/serial/Unpause 0.76
280 TestPause/serial/PauseAgain 0.86
281 TestPause/serial/DeletePaused 2.87
282 TestPause/serial/VerifyDeletedResources 0.37
290 TestNetworkPlugins/group/false 6.26
295 TestStartStop/group/old-k8s-version/serial/FirstStart 162.51
296 TestStartStop/group/old-k8s-version/serial/DeployApp 10.6
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.21
298 TestStartStop/group/old-k8s-version/serial/Stop 12.41
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
300 TestStartStop/group/old-k8s-version/serial/SecondStart 143.38
302 TestStartStop/group/no-preload/serial/FirstStart 68.22
303 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
304 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
305 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
306 TestStartStop/group/old-k8s-version/serial/Pause 3.08
307 TestStartStop/group/no-preload/serial/DeployApp 10.45
309 TestStartStop/group/embed-certs/serial/FirstStart 81.13
310 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.43
311 TestStartStop/group/no-preload/serial/Stop 12.43
312 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.28
313 TestStartStop/group/no-preload/serial/SecondStart 330.48
314 TestStartStop/group/embed-certs/serial/DeployApp 10.34
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
316 TestStartStop/group/embed-certs/serial/Stop 12.04
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
318 TestStartStop/group/embed-certs/serial/SecondStart 278.3
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
322 TestStartStop/group/no-preload/serial/Pause 3.13
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.92
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
328 TestStartStop/group/embed-certs/serial/Pause 3.91
330 TestStartStop/group/newest-cni/serial/FirstStart 39.91
331 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.38
333 TestStartStop/group/newest-cni/serial/Stop 1.27
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
335 TestStartStop/group/newest-cni/serial/SecondStart 16.65
336 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.51
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
340 TestStartStop/group/newest-cni/serial/Pause 3.26
341 TestNetworkPlugins/group/auto/Start 80.64
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.4
343 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.24
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
345 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 271.48
346 TestNetworkPlugins/group/auto/KubeletFlags 0.29
347 TestNetworkPlugins/group/auto/NetCatPod 10.28
348 TestNetworkPlugins/group/auto/DNS 0.18
349 TestNetworkPlugins/group/auto/Localhost 0.21
350 TestNetworkPlugins/group/auto/HairPin 0.16
351 TestNetworkPlugins/group/kindnet/Start 47.58
352 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
353 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
354 TestNetworkPlugins/group/kindnet/NetCatPod 11.25
355 TestNetworkPlugins/group/kindnet/DNS 0.18
356 TestNetworkPlugins/group/kindnet/Localhost 0.16
357 TestNetworkPlugins/group/kindnet/HairPin 0.16
358 TestNetworkPlugins/group/calico/Start 57.49
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/calico/KubeletFlags 0.31
361 TestNetworkPlugins/group/calico/NetCatPod 11.28
362 TestNetworkPlugins/group/calico/DNS 0.4
363 TestNetworkPlugins/group/calico/Localhost 0.2
364 TestNetworkPlugins/group/calico/HairPin 0.19
365 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
366 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
367 TestNetworkPlugins/group/custom-flannel/Start 69.92
368 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.36
369 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.03
370 TestNetworkPlugins/group/enable-default-cni/Start 74.29
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.28
373 TestNetworkPlugins/group/custom-flannel/DNS 0.18
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.31
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
381 TestNetworkPlugins/group/flannel/Start 63.84
382 TestNetworkPlugins/group/bridge/Start 82.68
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
385 TestNetworkPlugins/group/flannel/NetCatPod 10.27
386 TestNetworkPlugins/group/flannel/DNS 0.18
387 TestNetworkPlugins/group/flannel/Localhost 0.16
388 TestNetworkPlugins/group/flannel/HairPin 0.15
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.41
390 TestNetworkPlugins/group/bridge/NetCatPod 12.45
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.15
393 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.20.0/json-events (6.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-755816 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-755816 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.136285275s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.14s)
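The --download-only run above is invoked with -o=json, which streams one JSON event per line on stdout. When a json-events run misbehaves, a small decoder such as the hypothetical one below (not part of the test suite) makes the event stream easier to inspect; it assumes line-delimited JSON and passes any non-JSON lines through unchanged.

// jsonevents reads line-delimited JSON from stdin (for example piped from
// "out/minikube-linux-arm64 start -o=json --download-only ...") and prints
// each decoded event. Hypothetical triage helper, not part of the harness.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
	for scanner.Scan() {
		line := scanner.Bytes()
		var event map[string]interface{}
		if err := json.Unmarshal(line, &event); err != nil {
			// Non-JSON lines are passed through untouched.
			fmt.Println(string(line))
			continue
		}
		fmt.Printf("event: %v\n", event)
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
	}
}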

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1007 13:03:50.591651 1694126 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1007 13:03:50.591742 1694126 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
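preload-exists only asserts that the cached tarball from the previous download step is present on disk. The sketch below shows that kind of existence check in isolation; the path layout is copied from the log line above, and the helper is hypothetical rather than minikube's actual preload.go logic.

// Minimal sketch of a local preload existence check. The filename pattern
// matches the cri-o/arm64 preload shown in the log above; treat it as an
// illustration, not minikube's real naming logic.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-arm64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	// Assumed base directory; the CI run used
	// /home/jenkins/minikube-integration/18424-1688750/.minikube instead.
	base := filepath.Join(os.Getenv("HOME"), ".minikube")
	path := preloadPath(base, "v1.20.0")
	if info, err := os.Stat(path); err == nil && !info.IsDir() {
		fmt.Println("found local preload:", path)
	} else {
		fmt.Println("no local preload at:", path)
	}
}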

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-755816
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-755816: exit status 85 (67.231521ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-755816 | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC |          |
	|         | -p download-only-755816        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:03:44
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:03:44.502292 1694131 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:03:44.502499 1694131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:03:44.502526 1694131 out.go:358] Setting ErrFile to fd 2...
	I1007 13:03:44.502547 1694131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:03:44.502844 1694131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-1688750/.minikube/bin
	W1007 13:03:44.503019 1694131 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18424-1688750/.minikube/config/config.json: open /home/jenkins/minikube-integration/18424-1688750/.minikube/config/config.json: no such file or directory
	I1007 13:03:44.503520 1694131 out.go:352] Setting JSON to true
	I1007 13:03:44.504460 1694131 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":96376,"bootTime":1728209849,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1007 13:03:44.504535 1694131 start.go:139] virtualization:  
	I1007 13:03:44.506790 1694131 out.go:97] [download-only-755816] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1007 13:03:44.506937 1694131 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball: no such file or directory
	I1007 13:03:44.506973 1694131 notify.go:220] Checking for updates...
	I1007 13:03:44.508629 1694131 out.go:169] MINIKUBE_LOCATION=18424
	I1007 13:03:44.510241 1694131 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:03:44.512050 1694131 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18424-1688750/kubeconfig
	I1007 13:03:44.513840 1694131 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-1688750/.minikube
	I1007 13:03:44.515838 1694131 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1007 13:03:44.519215 1694131 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 13:03:44.519447 1694131 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:03:44.551762 1694131 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 13:03:44.551878 1694131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:03:44.604551 1694131 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 13:03:44.594639495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:03:44.604659 1694131 docker.go:318] overlay module found
	I1007 13:03:44.606445 1694131 out.go:97] Using the docker driver based on user configuration
	I1007 13:03:44.606472 1694131 start.go:297] selected driver: docker
	I1007 13:03:44.606479 1694131 start.go:901] validating driver "docker" against <nil>
	I1007 13:03:44.606573 1694131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:03:44.655588 1694131 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 13:03:44.646513865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:03:44.655818 1694131 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 13:03:44.656104 1694131 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1007 13:03:44.656268 1694131 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 13:03:44.657798 1694131 out.go:169] Using Docker driver with root privileges
	I1007 13:03:44.659821 1694131 cni.go:84] Creating CNI manager for ""
	I1007 13:03:44.659878 1694131 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 13:03:44.659892 1694131 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 13:03:44.659978 1694131 start.go:340] cluster config:
	{Name:download-only-755816 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-755816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:03:44.661403 1694131 out.go:97] Starting "download-only-755816" primary control-plane node in "download-only-755816" cluster
	I1007 13:03:44.661420 1694131 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 13:03:44.662629 1694131 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1007 13:03:44.662653 1694131 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 13:03:44.662803 1694131 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 13:03:44.677514 1694131 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 13:03:44.677684 1694131 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1007 13:03:44.677791 1694131 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 13:03:44.721768 1694131 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1007 13:03:44.721802 1694131 cache.go:56] Caching tarball of preloaded images
	I1007 13:03:44.721950 1694131 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 13:03:44.723492 1694131 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1007 13:03:44.723513 1694131 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1007 13:03:44.809361 1694131 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1007 13:03:48.816307 1694131 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1007 13:03:48.816435 1694131 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1007 13:03:49.145892 1694131 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	
	
	* The control-plane node download-only-755816 host does not exist
	  To start a cluster, run: "minikube start -p download-only-755816"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-755816
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (5.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-521885 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-521885 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.681027899s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.68s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1007 13:03:56.679894 1694126 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1007 13:03:56.679933 1694126 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-521885
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-521885: exit status 85 (71.34269ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-755816 | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC |                     |
	|         | -p download-only-755816        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC | 07 Oct 24 13:03 UTC |
	| delete  | -p download-only-755816        | download-only-755816 | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC | 07 Oct 24 13:03 UTC |
	| start   | -o=json --download-only        | download-only-521885 | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC |                     |
	|         | -p download-only-521885        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:03:51
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:03:51.048999 1694331 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:03:51.049239 1694331 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:03:51.049253 1694331 out.go:358] Setting ErrFile to fd 2...
	I1007 13:03:51.049258 1694331 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:03:51.049542 1694331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-1688750/.minikube/bin
	I1007 13:03:51.049997 1694331 out.go:352] Setting JSON to true
	I1007 13:03:51.050924 1694331 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":96382,"bootTime":1728209849,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1007 13:03:51.050993 1694331 start.go:139] virtualization:  
	I1007 13:03:51.053350 1694331 out.go:97] [download-only-521885] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 13:03:51.053602 1694331 notify.go:220] Checking for updates...
	I1007 13:03:51.056167 1694331 out.go:169] MINIKUBE_LOCATION=18424
	I1007 13:03:51.057424 1694331 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:03:51.058645 1694331 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18424-1688750/kubeconfig
	I1007 13:03:51.060005 1694331 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-1688750/.minikube
	I1007 13:03:51.061292 1694331 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1007 13:03:51.063962 1694331 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 13:03:51.064280 1694331 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:03:51.085034 1694331 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 13:03:51.085163 1694331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:03:51.145552 1694331 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-07 13:03:51.135821016 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:03:51.145668 1694331 docker.go:318] overlay module found
	I1007 13:03:51.147384 1694331 out.go:97] Using the docker driver based on user configuration
	I1007 13:03:51.147414 1694331 start.go:297] selected driver: docker
	I1007 13:03:51.147423 1694331 start.go:901] validating driver "docker" against <nil>
	I1007 13:03:51.147546 1694331 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:03:51.197404 1694331 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-07 13:03:51.188349424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:03:51.197613 1694331 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 13:03:51.197895 1694331 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1007 13:03:51.198059 1694331 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 13:03:51.200898 1694331 out.go:169] Using Docker driver with root privileges
	I1007 13:03:51.202384 1694331 cni.go:84] Creating CNI manager for ""
	I1007 13:03:51.202448 1694331 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 13:03:51.202462 1694331 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 13:03:51.202562 1694331 start.go:340] cluster config:
	{Name:download-only-521885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-521885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:03:51.203723 1694331 out.go:97] Starting "download-only-521885" primary control-plane node in "download-only-521885" cluster
	I1007 13:03:51.203746 1694331 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 13:03:51.205093 1694331 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1007 13:03:51.205117 1694331 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:03:51.205226 1694331 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 13:03:51.220272 1694331 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 13:03:51.220409 1694331 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1007 13:03:51.220434 1694331 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1007 13:03:51.220440 1694331 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1007 13:03:51.220451 1694331 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1007 13:03:51.269781 1694331 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1007 13:03:51.269813 1694331 cache.go:56] Caching tarball of preloaded images
	I1007 13:03:51.269982 1694331 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:03:51.272808 1694331 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1007 13:03:51.272846 1694331 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I1007 13:03:51.359786 1694331 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:8285fc512c7462f100de137f91fcd0ae -> /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1007 13:03:55.167454 1694331 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I1007 13:03:55.167605 1694331 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I1007 13:03:56.014044 1694331 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:03:56.014415 1694331 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/download-only-521885/config.json ...
	I1007 13:03:56.014450 1694331 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/download-only-521885/config.json: {Name:mk8b10ef25b6cb4418aad1b2e4288fd556aa46a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:03:56.015181 1694331 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:03:56.015368 1694331 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18424-1688750/.minikube/cache/linux/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-521885 host does not exist
	  To start a cluster, run: "minikube start -p download-only-521885"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)
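
Editor's note: the log above fetches the preload tarball with an md5 digest appended to the URL (?checksum=md5:8285fc512c7462f100de137f91fcd0ae) and then reports "verifying checksum of" the cached file before use. Below is a minimal, hypothetical Go sketch of that verification step only, assuming the expected digest and cache path are already known; it is not minikube's actual preload.go implementation.

	// verify_preload_checksum.go: hypothetical sketch of verifying a downloaded
	// preload tarball against a known md5 digest, as in the ?checksum=md5:... URLs above.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func verifyMD5(path, wantHex string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		// Path and digest copied from the log above.
		err := verifyMD5(
			"/home/jenkins/minikube-integration/18424-1688750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4",
			"8285fc512c7462f100de137f91fcd0ae",
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("preload checksum OK")
	}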

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-521885
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.54s)

                                                
                                                
=== RUN   TestBinaryMirror
I1007 13:03:57.913750 1694126 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-354806 --alsologtostderr --binary-mirror http://127.0.0.1:38505 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-354806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-354806
--- PASS: TestBinaryMirror (0.54s)
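
Editor's note: TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:38505 above) so the kubectl binary is fetched from it instead of dl.k8s.io. The sketch below serves a directory over HTTP with Go's net/http as such a mirror; the ./mirror directory layout and the fixed port are illustrative assumptions, not what the test harness actually runs internally.

	// binary_mirror.go: hypothetical local binary mirror of the kind
	// TestBinaryMirror points --binary-mirror at.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a directory mirroring the dl.k8s.io release layout, e.g.
		// ./mirror/release/v1.31.1/bin/linux/arm64/kubectl
		fs := http.FileServer(http.Dir("./mirror"))
		log.Println("serving binary mirror on 127.0.0.1:38505")
		log.Fatal(http.ListenAndServe("127.0.0.1:38505", fs))
	}

With something like this running, the start command in the log (out/minikube-linux-arm64 start --download-only -p binary-mirror-354806 --binary-mirror http://127.0.0.1:38505 ...) resolves its Kubernetes binary downloads against the local server.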

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:934: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-779469
addons_test.go:934: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-779469: exit status 85 (86.009822ms)

                                                
                                                
-- stdout --
	* Profile "addons-779469" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-779469"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-779469
addons_test.go:945: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-779469: exit status 85 (88.918419ms)

                                                
                                                
-- stdout --
	* Profile "addons-779469" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-779469"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
x
+
TestAddons/Setup (194.48s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-779469 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-779469 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m14.484039531s)
--- PASS: TestAddons/Setup (194.48s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-779469 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-779469 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 12.132557ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-b8457" [37368b21-bd4d-4d7c-b2ee-31f62690e0b7] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.055341771s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-p4tjk" [7f540d5b-5976-4e89-b2f2-c934d659d3f3] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004029848s
addons_test.go:331: (dbg) Run:  kubectl --context addons-779469 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-779469 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-779469 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.893316641s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 ip
2024/10/07 13:15:41 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.98s)
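
Editor's note: the registry test probes the in-cluster service with wget --spider from a busybox pod and then hits the node IP on port 5000 (the "GET http://192.168.49.2:5000" line above). Below is a minimal hedged sketch of the same reachability check done from the host in Go; the /v2/ path is the standard Docker registry HTTP API ping endpoint and is an addition here, not something this test is shown to request.

	// registry_ping.go: hypothetical reachability check against the registry addon,
	// mirroring the GET against http://192.168.49.2:5000 shown in the log.
	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// /v2/ is the registry API "ping" endpoint; a bare GET on :5000 also answers.
		resp, err := client.Get("http://192.168.49.2:5000/v2/")
		if err != nil {
			fmt.Fprintln(os.Stderr, "registry not reachable:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println("registry responded with status", resp.Status)
	}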

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.76s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jxfvh" [a2e4d1bc-d129-43a3-96c8-d36359fbc4ea] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004077728s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-779469 addons disable inspektor-gadget --alsologtostderr -v=1: (5.751036853s)
--- PASS: TestAddons/parallel/InspektorGadget (11.76s)

                                                
                                    
x
+
TestAddons/parallel/CSI (64.07s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1007 13:16:00.997473 1694126 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1007 13:16:01.008830 1694126 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1007 13:16:01.008868 1694126 kapi.go:107] duration metric: took 11.409324ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 11.42018ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-779469 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-779469 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1205defd-7d58-4df5-9355-72b54cfb22a4] Pending
helpers_test.go:344: "task-pv-pod" [1205defd-7d58-4df5-9355-72b54cfb22a4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1205defd-7d58-4df5-9355-72b54cfb22a4] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.007544986s
addons_test.go:511: (dbg) Run:  kubectl --context addons-779469 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-779469 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-779469 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-779469 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-779469 delete pod task-pv-pod: (1.04786422s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-779469 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-779469 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-779469 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6a7b81ee-564f-4a8f-bdb0-3d174a23d16b] Pending
helpers_test.go:344: "task-pv-pod-restore" [6a7b81ee-564f-4a8f-bdb0-3d174a23d16b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6a7b81ee-564f-4a8f-bdb0-3d174a23d16b] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004144241s
addons_test.go:553: (dbg) Run:  kubectl --context addons-779469 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-779469 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-779469 delete volumesnapshot new-snapshot-demo
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-779469 addons disable volumesnapshots --alsologtostderr -v=1: (1.069068078s)
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-779469 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.791082651s)
--- PASS: TestAddons/parallel/CSI (64.07s)
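
Editor's note: the CSI test polls the claim repeatedly with kubectl get pvc ... -o jsonpath={.status.phase} until it leaves Pending. The sketch below reproduces that polling loop from Go via os/exec, assuming kubectl and the addons-779469 context are available locally; the real helpers in helpers_test.go use their own retry and timeout logic.

	// wait_pvc_bound.go: hypothetical version of the PVC phase polling the CSI test
	// performs with kubectl get pvc hpvc -o jsonpath={.status.phase}.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "addons-779469",
				"get", "pvc", "hpvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			phase := strings.TrimSpace(string(out))
			if err == nil && phase == "Bound" {
				fmt.Println("pvc hpvc is Bound")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Fprintln(os.Stderr, "timed out waiting for pvc hpvc to become Bound")
		os.Exit(1)
	}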

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.09s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-779469 --alsologtostderr -v=1
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-p6tpj" [b1fd09bc-38eb-47f9-89b0-02d386258ecc] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-p6tpj" [b1fd09bc-38eb-47f9-89b0-02d386258ecc] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-p6tpj" [b1fd09bc-38eb-47f9-89b0-02d386258ecc] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-p6tpj" [b1fd09bc-38eb-47f9-89b0-02d386258ecc] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003860752s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 addons disable headlamp --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-779469 addons disable headlamp --alsologtostderr -v=1: (6.12426007s)
--- PASS: TestAddons/parallel/Headlamp (17.09s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.65s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-gbp2b" [7758d073-c2fb-4e9d-951a-9d6125345009] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004697181s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.65s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.55s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:883: (dbg) Run:  kubectl --context addons-779469 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:889: (dbg) Run:  kubectl --context addons-779469 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:893: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-779469 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ff999da1-a2e3-4ac2-be69-c3c86c4470d2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ff999da1-a2e3-4ac2-be69-c3c86c4470d2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ff999da1-a2e3-4ac2-be69-c3c86c4470d2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.006941928s
addons_test.go:901: (dbg) Run:  kubectl --context addons-779469 get pvc test-pvc -o=json
addons_test.go:910: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 ssh "cat /opt/local-path-provisioner/pvc-ef2e515d-a253-470e-a4c5-ae9b384f01de_default_test-pvc/file1"
addons_test.go:922: (dbg) Run:  kubectl --context addons-779469 delete pod test-local-path
addons_test.go:926: (dbg) Run:  kubectl --context addons-779469 delete pvc test-pvc
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.55s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mgxtx" [981684ce-573b-4c82-a5d9-19d8c41421ce] Running
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003845051s
addons_test.go:961: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-779469
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.62s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.82s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-zs92p" [e9bfb061-b365-4e4e-9c4c-a63c42c528ee] Running
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003665668s
addons_test.go:973: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 addons disable yakd --alsologtostderr -v=1
addons_test.go:973: (dbg) Done: out/minikube-linux-arm64 -p addons-779469 addons disable yakd --alsologtostderr -v=1: (5.815342621s)
--- PASS: TestAddons/parallel/Yakd (11.82s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.26s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-779469
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-779469: (11.976632364s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-779469
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-779469
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-779469
--- PASS: TestAddons/StoppedEnableDisable (12.26s)

                                                
                                    
x
+
TestCertOptions (38.8s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-272171 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-272171 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.161410234s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-272171 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-272171 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-272171 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-272171" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-272171
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-272171: (1.95820824s)
--- PASS: TestCertOptions (38.80s)
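
Editor's note: TestCertOptions asserts that the extra --apiserver-ips, --apiserver-names, and --apiserver-port values end up in the generated apiserver certificate, which it inspects with openssl x509 -text -noout. Below is a minimal sketch of a similar check in Go against a local copy of /var/lib/minikube/certs/apiserver.crt; the expected SANs come from the flags in the log, and copying the file off the node (e.g. via minikube ssh) is assumed to have happened beforehand.

	// check_apiserver_cert.go: hypothetical SAN check against a local copy of
	// /var/lib/minikube/certs/apiserver.crt, matching the --apiserver-* flags above.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // copied off the node beforehand
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames) // expect localhost and www.google.com among them
		for _, ip := range cert.IPAddresses {   // expect 127.0.0.1 and 192.168.15.15 among them
			fmt.Println("IP SAN:", ip)
		}
	}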

                                                
                                    
x
+
TestCertExpiration (330.8s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-446561 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-446561 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.200701533s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-446561 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-446561 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (1m48.975734315s)
helpers_test.go:175: Cleaning up "cert-expiration-446561" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-446561
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-446561: (2.620585062s)
--- PASS: TestCertExpiration (330.80s)

                                                
                                    
x
+
TestForceSystemdFlag (35.63s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-590224 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1007 14:02:54.725619 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-590224 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.390009613s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-590224 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-590224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-590224
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-590224: (2.806324487s)
--- PASS: TestForceSystemdFlag (35.63s)
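
Editor's note: TestForceSystemdFlag starts the cluster with --force-systemd and then reads /etc/crio/crio.conf.d/02-crio.conf over ssh, presumably to confirm CRI-O was switched to the systemd cgroup manager. The sketch below shows that kind of check in Go on a locally fetched copy of the drop-in; the key name cgroup_manager under [crio.runtime] is an assumption about CRI-O's TOML config, not something printed in this log.

	// check_crio_dropin.go: hypothetical check that a local copy of
	// /etc/crio/crio.conf.d/02-crio.conf selects the systemd cgroup manager.
	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("02-crio.conf") // fetched via `minikube ssh cat ...` beforehand
		if err != nil {
			log.Fatal(err)
		}
		// Assumed key: CRI-O's config uses cgroup_manager under [crio.runtime].
		if strings.Contains(string(data), `cgroup_manager = "systemd"`) {
			fmt.Println("CRI-O is configured for the systemd cgroup manager")
			return
		}
		fmt.Fprintln(os.Stderr, "systemd cgroup manager setting not found")
		os.Exit(1)
	}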

                                                
                                    
x
+
TestForceSystemdEnv (40.91s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-883727 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-883727 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.32407354s)
helpers_test.go:175: Cleaning up "force-systemd-env-883727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-883727
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-883727: (2.588050238s)
--- PASS: TestForceSystemdEnv (40.91s)

                                                
                                    
x
+
TestErrorSpam/setup (30.61s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-427564 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-427564 --driver=docker  --container-runtime=crio
E1007 13:22:13.832323 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:22:13.839044 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:22:13.850440 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:22:13.871898 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:22:13.913288 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:22:13.994666 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:22:14.156167 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:22:14.477925 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:22:15.120198 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:22:16.401929 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:22:18.963649 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:22:24.085146 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-427564 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-427564 --driver=docker  --container-runtime=crio: (30.606830824s)
--- PASS: TestErrorSpam/setup (30.61s)

TestErrorSpam/start (0.78s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-427564 --log_dir /tmp/nospam-427564 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-427564 --log_dir /tmp/nospam-427564 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-427564 --log_dir /tmp/nospam-427564 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.07s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-427564 --log_dir /tmp/nospam-427564 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-427564 --log_dir /tmp/nospam-427564 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-427564 --log_dir /tmp/nospam-427564 status
--- PASS: TestErrorSpam/status (1.07s)

TestErrorSpam/pause (1.94s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-427564 --log_dir /tmp/nospam-427564 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-427564 --log_dir /tmp/nospam-427564 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-427564 --log_dir /tmp/nospam-427564 pause
--- PASS: TestErrorSpam/pause (1.94s)

TestErrorSpam/unpause (1.81s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-427564 --log_dir /tmp/nospam-427564 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-427564 --log_dir /tmp/nospam-427564 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-427564 --log_dir /tmp/nospam-427564 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (1.49s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-427564 --log_dir /tmp/nospam-427564 stop
E1007 13:22:34.327055 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-427564 --log_dir /tmp/nospam-427564 stop: (1.277338785s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-427564 --log_dir /tmp/nospam-427564 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-427564 --log_dir /tmp/nospam-427564 stop
--- PASS: TestErrorSpam/stop (1.49s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/18424-1688750/.minikube/files/etc/test/nested/copy/1694126/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.88s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-730125 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1007 13:22:54.808778 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-730125 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (45.880118329s)
--- PASS: TestFunctional/serial/StartWithProxy (45.88s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.51s)
=== RUN   TestFunctional/serial/SoftStart
I1007 13:23:25.397490 1694126 config.go:182] Loaded profile config "functional-730125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-730125 --alsologtostderr -v=8
E1007 13:23:35.770095 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-730125 --alsologtostderr -v=8: (27.497924218s)
functional_test.go:663: soft start took 27.505281091s for "functional-730125" cluster.
I1007 13:23:52.902545 1694126 config.go:182] Loaded profile config "functional-730125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (27.51s)

TestFunctional/serial/KubeContext (0.07s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-730125 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-730125 cache add registry.k8s.io/pause:3.1: (1.456703218s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-730125 cache add registry.k8s.io/pause:3.3: (1.342129952s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-730125 cache add registry.k8s.io/pause:latest: (1.314562095s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.11s)

TestFunctional/serial/CacheCmd/cache/add_local (1.45s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-730125 /tmp/TestFunctionalserialCacheCmdcacheadd_local2454803095/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 cache add minikube-local-cache-test:functional-730125
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 cache delete minikube-local-cache-test:functional-730125
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-730125
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-730125 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (289.433641ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-730125 cache reload: (1.342610609s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.27s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 kubectl -- --context functional-730125 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-730125 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (40s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-730125 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-730125 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.000511502s)
functional_test.go:761: restart took 40.000602281s for "functional-730125" cluster.
I1007 13:24:41.743448 1694126 config.go:182] Loaded profile config "functional-730125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (40.00s)

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-730125 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.73s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-730125 logs: (1.729910983s)
--- PASS: TestFunctional/serial/LogsCmd (1.73s)

TestFunctional/serial/LogsFileCmd (1.79s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 logs --file /tmp/TestFunctionalserialLogsFileCmd3722635613/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-730125 logs --file /tmp/TestFunctionalserialLogsFileCmd3722635613/001/logs.txt: (1.78616284s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.79s)

TestFunctional/serial/InvalidService (4.5s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-730125 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-730125
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-730125: exit status 115 (656.697548ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31012 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-730125 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.50s)

TestFunctional/parallel/ConfigCmd (0.56s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-730125 config get cpus: exit status 14 (88.077382ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-730125 config get cpus: exit status 14 (98.581557ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.56s)

TestFunctional/parallel/DashboardCmd (10.81s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-730125 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-730125 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1727213: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.81s)

TestFunctional/parallel/DryRun (0.44s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-730125 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-730125 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (184.735784ms)

-- stdout --
	* [functional-730125] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-1688750/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-1688750/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1007 13:25:23.601887 1726912 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:25:23.602013 1726912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:25:23.602021 1726912 out.go:358] Setting ErrFile to fd 2...
	I1007 13:25:23.602027 1726912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:25:23.602307 1726912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-1688750/.minikube/bin
	I1007 13:25:23.602696 1726912 out.go:352] Setting JSON to false
	I1007 13:25:23.604025 1726912 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":97675,"bootTime":1728209849,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1007 13:25:23.604112 1726912 start.go:139] virtualization:  
	I1007 13:25:23.607432 1726912 out.go:177] * [functional-730125] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 13:25:23.610824 1726912 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:25:23.610924 1726912 notify.go:220] Checking for updates...
	I1007 13:25:23.617002 1726912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:25:23.619704 1726912 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-1688750/kubeconfig
	I1007 13:25:23.622406 1726912 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-1688750/.minikube
	I1007 13:25:23.626250 1726912 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 13:25:23.629030 1726912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:25:23.632228 1726912 config.go:182] Loaded profile config "functional-730125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:25:23.632894 1726912 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:25:23.661607 1726912 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 13:25:23.661994 1726912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:25:23.712762 1726912 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 13:25:23.703030963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:25:23.712879 1726912 docker.go:318] overlay module found
	I1007 13:25:23.715876 1726912 out.go:177] * Using the docker driver based on existing profile
	I1007 13:25:23.718598 1726912 start.go:297] selected driver: docker
	I1007 13:25:23.718624 1726912 start.go:901] validating driver "docker" against &{Name:functional-730125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-730125 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:25:23.718758 1726912 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:25:23.722059 1726912 out.go:201] 
	W1007 13:25:23.724619 1726912 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1007 13:25:23.727402 1726912 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-730125 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)

TestFunctional/parallel/InternationalLanguage (0.2s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-730125 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-730125 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (199.923069ms)

-- stdout --
	* [functional-730125] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-1688750/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-1688750/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1007 13:25:23.408838 1726868 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:25:23.409022 1726868 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:25:23.409032 1726868 out.go:358] Setting ErrFile to fd 2...
	I1007 13:25:23.409038 1726868 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:25:23.409464 1726868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-1688750/.minikube/bin
	I1007 13:25:23.409888 1726868 out.go:352] Setting JSON to false
	I1007 13:25:23.411295 1726868 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":97675,"bootTime":1728209849,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1007 13:25:23.411474 1726868 start.go:139] virtualization:  
	I1007 13:25:23.415675 1726868 out.go:177] * [functional-730125] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1007 13:25:23.418470 1726868 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:25:23.418650 1726868 notify.go:220] Checking for updates...
	I1007 13:25:23.423974 1726868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:25:23.426776 1726868 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-1688750/kubeconfig
	I1007 13:25:23.429418 1726868 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-1688750/.minikube
	I1007 13:25:23.431995 1726868 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 13:25:23.434800 1726868 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:25:23.438007 1726868 config.go:182] Loaded profile config "functional-730125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:25:23.439482 1726868 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:25:23.469943 1726868 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 13:25:23.470086 1726868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:25:23.525996 1726868 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 13:25:23.514949284 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:25:23.526132 1726868 docker.go:318] overlay module found
	I1007 13:25:23.530869 1726868 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1007 13:25:23.533572 1726868 start.go:297] selected driver: docker
	I1007 13:25:23.533601 1726868 start.go:901] validating driver "docker" against &{Name:functional-730125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-730125 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:25:23.533742 1726868 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:25:23.537075 1726868 out.go:201] 
	W1007 13:25:23.539861 1726868 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1007 13:25:23.542617 1726868 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.03s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

TestFunctional/parallel/ServiceCmdConnect (12.76s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-730125 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-730125 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-tszsc" [8414d5b1-7f8c-4f29-84b9-b99742f4f8b5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-tszsc" [8414d5b1-7f8c-4f29-84b9-b99742f4f8b5] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004692371s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32137
functional_test.go:1675: http://192.168.49.2:32137: success! body:

Hostname: hello-node-connect-65d86f57f4-tszsc

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32137
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.76s)

TestFunctional/parallel/AddonsCmd (0.32s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.32s)

TestFunctional/parallel/PersistentVolumeClaim (26.46s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f90e300b-348a-4d33-a51a-b8a27a3ec9ff] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004635506s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-730125 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-730125 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-730125 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-730125 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c74884b0-a548-4c15-8115-a05e66772ba6] Pending
helpers_test.go:344: "sp-pod" [c74884b0-a548-4c15-8115-a05e66772ba6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c74884b0-a548-4c15-8115-a05e66772ba6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003402924s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-730125 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-730125 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-730125 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d4fffcae-3f26-4e57-af8e-2a53d93cc7c1] Pending
helpers_test.go:344: "sp-pod" [d4fffcae-3f26-4e57-af8e-2a53d93cc7c1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00306042s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-730125 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.46s)

TestFunctional/parallel/SSHCmd (0.66s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

TestFunctional/parallel/CpCmd (2.19s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh -n functional-730125 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 cp functional-730125:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4122243224/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh -n functional-730125 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh -n functional-730125 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.19s)

TestFunctional/parallel/FileSync (0.29s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1694126/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "sudo cat /etc/test/nested/copy/1694126/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (2.19s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1694126.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "sudo cat /etc/ssl/certs/1694126.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1694126.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "sudo cat /usr/share/ca-certificates/1694126.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/16941262.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "sudo cat /etc/ssl/certs/16941262.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/16941262.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "sudo cat /usr/share/ca-certificates/16941262.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.19s)

TestFunctional/parallel/NodeLabels (0.09s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-730125 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-730125 ssh "sudo systemctl is-active docker": exit status 1 (279.396877ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-730125 ssh "sudo systemctl is-active containerd": exit status 1 (281.15027ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                    
TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.22s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-730125 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-730125 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-730125 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-730125 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1724649: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-730125 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.43s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-730125 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fdf5a324-8157-48da-a1f8-e2861ccd61dd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [fdf5a324-8157-48da-a1f8-e2861ccd61dd] Running
E1007 13:24:57.691604 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004904473s
I1007 13:25:00.078916 1694126 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.43s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.22s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-730125 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.22s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.8.135 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-730125 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-730125 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-730125 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-tjqwc" [4dd9c1ba-dca3-4225-bcb9-b61b46486920] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-tjqwc" [4dd9c1ba-dca3-4225-bcb9-b61b46486920] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.009354721s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "354.6378ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "63.316522ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "340.728022ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "105.44643ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.68s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.68s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.53s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-730125 /tmp/TestFunctionalparallelMountCmdany-port4017733557/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728307519810441034" to /tmp/TestFunctionalparallelMountCmdany-port4017733557/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728307519810441034" to /tmp/TestFunctionalparallelMountCmdany-port4017733557/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728307519810441034" to /tmp/TestFunctionalparallelMountCmdany-port4017733557/001/test-1728307519810441034
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-730125 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (442.300897ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1007 13:25:20.253058 1694126 retry.go:31] will retry after 530.654866ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  7 13:25 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  7 13:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  7 13:25 test-1728307519810441034
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh cat /mount-9p/test-1728307519810441034
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-730125 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8974853a-c149-4631-8bdd-1df251541f0f] Pending
helpers_test.go:344: "busybox-mount" [8974853a-c149-4631-8bdd-1df251541f0f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8974853a-c149-4631-8bdd-1df251541f0f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8974853a-c149-4631-8bdd-1df251541f0f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.008591212s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-730125 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-730125 /tmp/TestFunctionalparallelMountCmdany-port4017733557/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 service list -o json
functional_test.go:1494: Took "512.583056ms" to run "out/minikube-linux-arm64 -p functional-730125 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32216
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.47s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.46s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32216
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.39s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-730125 /tmp/TestFunctionalparallelMountCmdspecific-port2177119102/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-730125 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (557.060843ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1007 13:25:29.899735 1694126 retry.go:31] will retry after 632.771078ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-730125 /tmp/TestFunctionalparallelMountCmdspecific-port2177119102/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-730125 ssh "sudo umount -f /mount-9p": exit status 1 (323.05378ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-730125 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-730125 /tmp/TestFunctionalparallelMountCmdspecific-port2177119102/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.05s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-730125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2313994674/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-730125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2313994674/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-730125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2313994674/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-730125 ssh "findmnt -T" /mount1: exit status 1 (705.716422ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1007 13:25:32.447703 1694126 retry.go:31] will retry after 324.917718ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-730125 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-730125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2313994674/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-730125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2313994674/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-730125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2313994674/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.05s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.2s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-730125 version -o=json --components: (1.203601093s)
--- PASS: TestFunctional/parallel/Version/components (1.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-730125 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-730125
localhost/kicbase/echo-server:functional-730125
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-730125 image ls --format short --alsologtostderr:
I1007 13:25:42.081897 1729744 out.go:345] Setting OutFile to fd 1 ...
I1007 13:25:42.082102 1729744 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:25:42.082109 1729744 out.go:358] Setting ErrFile to fd 2...
I1007 13:25:42.082114 1729744 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:25:42.082413 1729744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-1688750/.minikube/bin
I1007 13:25:42.083414 1729744 config.go:182] Loaded profile config "functional-730125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 13:25:42.083623 1729744 config.go:182] Loaded profile config "functional-730125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 13:25:42.084249 1729744 cli_runner.go:164] Run: docker container inspect functional-730125 --format={{.State.Status}}
I1007 13:25:42.110887 1729744 ssh_runner.go:195] Run: systemctl --version
I1007 13:25:42.110955 1729744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-730125
I1007 13:25:42.136896 1729744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38276 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/functional-730125/id_rsa Username:docker}
I1007 13:25:42.232821 1729744 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-730125 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 7f8aa378bb47d | 67MB   |
| localhost/minikube-local-cache-test     | functional-730125  | 023ae35322ee9 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | d3f53a98c0a9d | 92.6MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 279f381cb3736 | 86.9MB |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | alpine             | 577a23b5858b9 | 52.3MB |
| docker.io/library/nginx                 | latest             | 048e090385966 | 201MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| localhost/kicbase/echo-server           | functional-730125  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 24a140c548c07 | 96MB   |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-730125 image ls --format table --alsologtostderr:
I1007 13:25:42.417154 1729812 out.go:345] Setting OutFile to fd 1 ...
I1007 13:25:42.417364 1729812 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:25:42.417396 1729812 out.go:358] Setting ErrFile to fd 2...
I1007 13:25:42.417417 1729812 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:25:42.417686 1729812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-1688750/.minikube/bin
I1007 13:25:42.418377 1729812 config.go:182] Loaded profile config "functional-730125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 13:25:42.418549 1729812 config.go:182] Loaded profile config "functional-730125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 13:25:42.419050 1729812 cli_runner.go:164] Run: docker container inspect functional-730125 --format={{.State.Status}}
I1007 13:25:42.443371 1729812 ssh_runner.go:195] Run: systemctl --version
I1007 13:25:42.443425 1729812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-730125
I1007 13:25:42.473359 1729812 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38276 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/functional-730125/id_rsa Username:docker}
I1007 13:25:42.580072 1729812 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-730125 image ls --format json --alsologtostderr:
[{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"86930758"},{"id":"048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9","repoDigests":["docker.io/library/nginx@sha256:96c43ba316370e0c1d1810b9693e647cc62a172a842d77888c299f3944922491","docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"200984127"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854
ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5
064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67007814"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["reg
istry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"95951255"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha2
56:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-730125"],"size":"4788229"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"023ae35322ee9132c581a7ec8f3a45c55f7881e6e0cc6cb3fc3fd6360c806dbd","repoDigests":["localhost/minikube-local-cache-test@sha256:8384dfaada96695c7c55adbe4b36a5736c2fae0c
4ca3663452713e66a4ddceef"],"repoTags":["localhost/minikube-local-cache-test:functional-730125"],"size":"3330"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb","registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"92632544"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c",
"docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250","docker.io/library/nginx@sha256:d1f949a77b81762af560a6e8f3f2bc2817f1c575ede5a756749e3c5d459e6478"],"repoTags":["docker.io/library/nginx:alpine"],"size":"52254450"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-730125 image ls --format json --alsologtostderr:
I1007 13:25:42.428484 1729807 out.go:345] Setting OutFile to fd 1 ...
I1007 13:25:42.432601 1729807 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:25:42.432623 1729807 out.go:358] Setting ErrFile to fd 2...
I1007 13:25:42.432629 1729807 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:25:42.432999 1729807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-1688750/.minikube/bin
I1007 13:25:42.434119 1729807 config.go:182] Loaded profile config "functional-730125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 13:25:42.434315 1729807 config.go:182] Loaded profile config "functional-730125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 13:25:42.435501 1729807 cli_runner.go:164] Run: docker container inspect functional-730125 --format={{.State.Status}}
I1007 13:25:42.458664 1729807 ssh_runner.go:195] Run: systemctl --version
I1007 13:25:42.458731 1729807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-730125
I1007 13:25:42.489047 1729807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38276 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/functional-730125/id_rsa Username:docker}
I1007 13:25:42.588157 1729807 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-730125 image ls --format yaml --alsologtostderr:
- id: 577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
- docker.io/library/nginx@sha256:d1f949a77b81762af560a6e8f3f2bc2817f1c575ede5a756749e3c5d459e6478
repoTags:
- docker.io/library/nginx:alpine
size: "52254450"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
- registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "92632544"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67007814"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 023ae35322ee9132c581a7ec8f3a45c55f7881e6e0cc6cb3fc3fd6360c806dbd
repoDigests:
- localhost/minikube-local-cache-test@sha256:8384dfaada96695c7c55adbe4b36a5736c2fae0c4ca3663452713e66a4ddceef
repoTags:
- localhost/minikube-local-cache-test:functional-730125
size: "3330"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-730125
size: "4788229"
- id: 048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests:
- docker.io/library/nginx@sha256:96c43ba316370e0c1d1810b9693e647cc62a172a842d77888c299f3944922491
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "200984127"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "86930758"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "95951255"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-730125 image ls --format yaml --alsologtostderr:
I1007 13:25:42.090674 1729745 out.go:345] Setting OutFile to fd 1 ...
I1007 13:25:42.090899 1729745 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:25:42.090928 1729745 out.go:358] Setting ErrFile to fd 2...
I1007 13:25:42.090958 1729745 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:25:42.091284 1729745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-1688750/.minikube/bin
I1007 13:25:42.092079 1729745 config.go:182] Loaded profile config "functional-730125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 13:25:42.092307 1729745 config.go:182] Loaded profile config "functional-730125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 13:25:42.092896 1729745 cli_runner.go:164] Run: docker container inspect functional-730125 --format={{.State.Status}}
I1007 13:25:42.122858 1729745 ssh_runner.go:195] Run: systemctl --version
I1007 13:25:42.122923 1729745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-730125
I1007 13:25:42.169092 1729745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38276 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/functional-730125/id_rsa Username:docker}
I1007 13:25:42.265166 1729745 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-730125 ssh pgrep buildkitd: exit status 1 (269.722284ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image build -t localhost/my-image:functional-730125 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-730125 image build -t localhost/my-image:functional-730125 testdata/build --alsologtostderr: (3.337826421s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-730125 image build -t localhost/my-image:functional-730125 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4e919fc399e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-730125
--> 50d58d2c16e
Successfully tagged localhost/my-image:functional-730125
50d58d2c16ef9fd7b5afbd217e85a516437c35ced5ed28a8d81eef4283ef431a
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-730125 image build -t localhost/my-image:functional-730125 testdata/build --alsologtostderr:
I1007 13:25:42.969791 1729928 out.go:345] Setting OutFile to fd 1 ...
I1007 13:25:42.970789 1729928 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:25:42.970827 1729928 out.go:358] Setting ErrFile to fd 2...
I1007 13:25:42.970846 1729928 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:25:42.971129 1729928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-1688750/.minikube/bin
I1007 13:25:42.971833 1729928 config.go:182] Loaded profile config "functional-730125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 13:25:42.972447 1729928 config.go:182] Loaded profile config "functional-730125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 13:25:42.972996 1729928 cli_runner.go:164] Run: docker container inspect functional-730125 --format={{.State.Status}}
I1007 13:25:42.990095 1729928 ssh_runner.go:195] Run: systemctl --version
I1007 13:25:42.990163 1729928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-730125
I1007 13:25:43.011006 1729928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38276 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/functional-730125/id_rsa Username:docker}
I1007 13:25:43.107906 1729928 build_images.go:161] Building image from path: /tmp/build.3122635823.tar
I1007 13:25:43.107977 1729928 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1007 13:25:43.116854 1729928 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3122635823.tar
I1007 13:25:43.120497 1729928 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3122635823.tar: stat -c "%s %y" /var/lib/minikube/build/build.3122635823.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3122635823.tar': No such file or directory
I1007 13:25:43.120531 1729928 ssh_runner.go:362] scp /tmp/build.3122635823.tar --> /var/lib/minikube/build/build.3122635823.tar (3072 bytes)
I1007 13:25:43.147247 1729928 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3122635823
I1007 13:25:43.156289 1729928 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3122635823 -xf /var/lib/minikube/build/build.3122635823.tar
I1007 13:25:43.165549 1729928 crio.go:315] Building image: /var/lib/minikube/build/build.3122635823
I1007 13:25:43.165628 1729928 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-730125 /var/lib/minikube/build/build.3122635823 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1007 13:25:46.223507 1729928 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-730125 /var/lib/minikube/build/build.3122635823 --cgroup-manager=cgroupfs: (3.057845985s)
I1007 13:25:46.223598 1729928 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3122635823
I1007 13:25:46.233210 1729928 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3122635823.tar
I1007 13:25:46.241836 1729928 build_images.go:217] Built localhost/my-image:functional-730125 from /tmp/build.3122635823.tar
I1007 13:25:46.241869 1729928 build_images.go:133] succeeded building to: functional-730125
I1007 13:25:46.241875 1729928 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.84s)
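
The ImageBuild test above stages a build-context tarball on the node and drives sudo podman build through the CRI-O runtime. Outside the test harness the same flow is available as the minikube image build subcommand. A minimal Go sketch of that flow, assuming minikube is on PATH; the profile name, image tag and ./app context directory are placeholders, not values from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "demo" // placeholder profile name

	// Build ./app into the cluster's container runtime, mirroring the
	// in-node podman build shown in the log above.
	build := exec.Command("minikube", "-p", profile, "image", "build",
		"-t", "localhost/my-image:demo", "./app")
	if out, err := build.CombinedOutput(); err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}

	// List images known to the runtime to confirm the tag landed,
	// the same follow-up check the test performs with "image ls".
	out, err := exec.Command("minikube", "-p", profile, "image", "ls").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}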

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
2024/10/07 13:25:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-730125
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.83s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
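
The three UpdateContextCmd subtests above each run minikube update-context, which rewrites the profile's kubeconfig entry so that the server address matches the container's current IP and port. A small Go sketch of the same check, assuming minikube and kubectl are on PATH; the profile name is a placeholder:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a command and returns its combined output, aborting on error,
// roughly what the test helpers do around each step.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	return string(out)
}

func main() {
	profile := "demo" // placeholder profile name

	// Repoint the kubeconfig entry at the profile's current API endpoint.
	fmt.Print(run("minikube", "-p", profile, "update-context"))

	// Confirm kubectl resolves the refreshed context.
	fmt.Print(run("kubectl", "config", "current-context"))
}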

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image load --daemon kicbase/echo-server:functional-730125 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-730125 image load --daemon kicbase/echo-server:functional-730125 --alsologtostderr: (1.354541181s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image load --daemon kicbase/echo-server:functional-730125 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-730125
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image load --daemon kicbase/echo-server:functional-730125 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image save kicbase/echo-server:functional-730125 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image rm kicbase/echo-server:functional-730125 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-730125
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-730125 image save --daemon kicbase/echo-server:functional-730125 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-730125
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
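
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a save / remove / reload round trip between the host and the node's container runtime. A Go sketch of that round trip using the same CLI subcommands; the profile name, image tag and tarball path are placeholders:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

// must runs a minikube subcommand and aborts with its output on failure.
func must(args ...string) {
	if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
}

func main() {
	profile := "demo"                 // placeholder profile name
	img := "kicbase/echo-server:demo" // placeholder tag
	tar := filepath.Join(os.TempDir(), "echo-server-save.tar")

	must("-p", profile, "image", "save", img, tar) // runtime -> host tarball
	must("-p", profile, "image", "rm", img)        // drop it from the runtime
	must("-p", profile, "image", "load", tar)      // reload it from the tarball
}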

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-730125
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-730125
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-730125
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (173.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-362969 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1007 13:27:13.830974 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:27:41.532939 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-362969 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m52.636413324s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (173.50s)
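
StartCluster boots a multi-control-plane cluster with the --ha flag and then polls status across the nodes. A Go sketch reproducing the start outside the harness, assuming the docker driver and CRI-O runtime used throughout this report; the profile name is a placeholder:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	profile := "ha-demo" // placeholder profile name

	start := exec.Command("minikube", "start", "-p", profile,
		"--ha",        // provision multiple control-plane nodes behind one endpoint
		"--wait=true", // block until core components report ready
		"--memory=2200",
		"--driver=docker",
		"--container-runtime=crio")
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	if err := start.Run(); err != nil {
		log.Fatal(err)
	}

	// Per-node summary, the same follow-up the test does with "status".
	status := exec.Command("minikube", "-p", profile, "status")
	status.Stdout, status.Stderr = os.Stdout, os.Stderr
	_ = status.Run() // status may exit non-zero if any node is not fully up
}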

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-362969 -- rollout status deployment/busybox: (5.71250062s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- exec busybox-7dff88458-c7s47 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- exec busybox-7dff88458-dj9qd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- exec busybox-7dff88458-wwxsq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- exec busybox-7dff88458-c7s47 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- exec busybox-7dff88458-dj9qd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- exec busybox-7dff88458-wwxsq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- exec busybox-7dff88458-c7s47 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- exec busybox-7dff88458-dj9qd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- exec busybox-7dff88458-wwxsq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.69s)
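
DeployApp applies a busybox Deployment and verifies in-cluster DNS from every replica by exec'ing nslookup against kubernetes.io, kubernetes.default and the fully qualified service name. A condensed Go sketch of that verification loop; the pod label selector is a placeholder, since the label used by ha-pod-dns-test.yaml is not shown in this log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Collect pod names with a jsonpath query, as the test does.
	// "app=busybox" is a placeholder selector for the deployed pods.
	out, err := exec.Command("kubectl", "get", "pods", "-l", "app=busybox",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatal(err)
	}

	hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range hosts {
			// Every replica must be able to resolve each name.
			if res, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", host).CombinedOutput(); err != nil {
				log.Fatalf("nslookup %s from %s failed: %v\n%s", host, pod, err, res)
			}
			fmt.Printf("%s resolved %s\n", pod, host)
		}
	}
}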

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- exec busybox-7dff88458-c7s47 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- exec busybox-7dff88458-c7s47 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- exec busybox-7dff88458-dj9qd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- exec busybox-7dff88458-dj9qd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- exec busybox-7dff88458-wwxsq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-362969 -- exec busybox-7dff88458-wwxsq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.69s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (64.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-362969 -v=7 --alsologtostderr
E1007 13:29:51.654387 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:29:51.660692 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:29:51.672098 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:29:51.693543 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:29:51.735594 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:29:51.817018 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:29:51.979217 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:29:52.300609 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:29:52.942020 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:29:54.224306 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:29:56.785945 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-362969 -v=7 --alsologtostderr: (1m3.893965187s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (64.88s)
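
AddWorkerNode grows the running cluster with minikube node add, which provisions a new machine and joins it as a worker; the later AddSecondaryNode step passes --control-plane to join another control-plane member instead. A Go sketch covering both variants; the profile name is a placeholder:

package main

import (
	"log"
	"os"
	"os/exec"
)

// addNode joins a new node to an existing profile, optionally as a
// control-plane member rather than a worker.
func addNode(profile string, controlPlane bool) error {
	args := []string{"node", "add", "-p", profile}
	if controlPlane {
		args = append(args, "--control-plane")
	}
	cmd := exec.Command("minikube", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	profile := "ha-demo" // placeholder profile name
	if err := addNode(profile, false); err != nil { // worker, as in AddWorkerNode
		log.Fatal(err)
	}
}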

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-362969 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.014846371s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-362969 status --output json -v=7 --alsologtostderr: (1.15951326s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp testdata/cp-test.txt ha-362969:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp ha-362969:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1546648790/001/cp-test_ha-362969.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969 "sudo cat /home/docker/cp-test.txt"
E1007 13:30:01.907330 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp ha-362969:/home/docker/cp-test.txt ha-362969-m02:/home/docker/cp-test_ha-362969_ha-362969-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m02 "sudo cat /home/docker/cp-test_ha-362969_ha-362969-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp ha-362969:/home/docker/cp-test.txt ha-362969-m03:/home/docker/cp-test_ha-362969_ha-362969-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m03 "sudo cat /home/docker/cp-test_ha-362969_ha-362969-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp ha-362969:/home/docker/cp-test.txt ha-362969-m04:/home/docker/cp-test_ha-362969_ha-362969-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m04 "sudo cat /home/docker/cp-test_ha-362969_ha-362969-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp testdata/cp-test.txt ha-362969-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp ha-362969-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1546648790/001/cp-test_ha-362969-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp ha-362969-m02:/home/docker/cp-test.txt ha-362969:/home/docker/cp-test_ha-362969-m02_ha-362969.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969 "sudo cat /home/docker/cp-test_ha-362969-m02_ha-362969.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp ha-362969-m02:/home/docker/cp-test.txt ha-362969-m03:/home/docker/cp-test_ha-362969-m02_ha-362969-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m03 "sudo cat /home/docker/cp-test_ha-362969-m02_ha-362969-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp ha-362969-m02:/home/docker/cp-test.txt ha-362969-m04:/home/docker/cp-test_ha-362969-m02_ha-362969-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m04 "sudo cat /home/docker/cp-test_ha-362969-m02_ha-362969-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp testdata/cp-test.txt ha-362969-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp ha-362969-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1546648790/001/cp-test_ha-362969-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp ha-362969-m03:/home/docker/cp-test.txt ha-362969:/home/docker/cp-test_ha-362969-m03_ha-362969.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969 "sudo cat /home/docker/cp-test_ha-362969-m03_ha-362969.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp ha-362969-m03:/home/docker/cp-test.txt ha-362969-m02:/home/docker/cp-test_ha-362969-m03_ha-362969-m02.txt
E1007 13:30:12.150087 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m02 "sudo cat /home/docker/cp-test_ha-362969-m03_ha-362969-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp ha-362969-m03:/home/docker/cp-test.txt ha-362969-m04:/home/docker/cp-test_ha-362969-m03_ha-362969-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m04 "sudo cat /home/docker/cp-test_ha-362969-m03_ha-362969-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp testdata/cp-test.txt ha-362969-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp ha-362969-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1546648790/001/cp-test_ha-362969-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp ha-362969-m04:/home/docker/cp-test.txt ha-362969:/home/docker/cp-test_ha-362969-m04_ha-362969.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969 "sudo cat /home/docker/cp-test_ha-362969-m04_ha-362969.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp ha-362969-m04:/home/docker/cp-test.txt ha-362969-m02:/home/docker/cp-test_ha-362969-m04_ha-362969-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m02 "sudo cat /home/docker/cp-test_ha-362969-m04_ha-362969-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 cp ha-362969-m04:/home/docker/cp-test.txt ha-362969-m03:/home/docker/cp-test_ha-362969-m04_ha-362969-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 ssh -n ha-362969-m03 "sudo cat /home/docker/cp-test_ha-362969-m04_ha-362969-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.26s)
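
CopyFile pushes testdata/cp-test.txt to every node with minikube cp, addressing targets as node:path, and reads each copy back over minikube ssh -n. A Go sketch of one host-to-node copy plus verification; the profile, node name and file paths are placeholders:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile, node := "ha-demo", "ha-demo-m02" // placeholder profile and node names

	// Host -> node copy; the remote side is addressed as <node>:<path>.
	cp := exec.Command("minikube", "-p", profile, "cp",
		"cp-test.txt", node+":/home/docker/cp-test.txt")
	if out, err := cp.CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	// Read the copy back on that node, as the helpers do with "ssh -n".
	out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}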

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-362969 node stop m02 -v=7 --alsologtostderr: (12.03508031s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-362969 status -v=7 --alsologtostderr: exit status 7 (754.743601ms)

                                                
                                                
-- stdout --
	ha-362969
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-362969-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-362969-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-362969-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 13:30:30.604680 1745671 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:30:30.604903 1745671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:30:30.604933 1745671 out.go:358] Setting ErrFile to fd 2...
	I1007 13:30:30.604954 1745671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:30:30.605234 1745671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-1688750/.minikube/bin
	I1007 13:30:30.605443 1745671 out.go:352] Setting JSON to false
	I1007 13:30:30.605499 1745671 mustload.go:65] Loading cluster: ha-362969
	I1007 13:30:30.605584 1745671 notify.go:220] Checking for updates...
	I1007 13:30:30.605953 1745671 config.go:182] Loaded profile config "ha-362969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:30:30.605993 1745671 status.go:174] checking status of ha-362969 ...
	I1007 13:30:30.606565 1745671 cli_runner.go:164] Run: docker container inspect ha-362969 --format={{.State.Status}}
	I1007 13:30:30.631589 1745671 status.go:371] ha-362969 host status = "Running" (err=<nil>)
	I1007 13:30:30.631613 1745671 host.go:66] Checking if "ha-362969" exists ...
	I1007 13:30:30.631920 1745671 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-362969
	I1007 13:30:30.662086 1745671 host.go:66] Checking if "ha-362969" exists ...
	I1007 13:30:30.662607 1745671 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:30:30.662657 1745671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969
	I1007 13:30:30.681590 1745671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38281 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969/id_rsa Username:docker}
	I1007 13:30:30.782213 1745671 ssh_runner.go:195] Run: systemctl --version
	I1007 13:30:30.787003 1745671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:30:30.799180 1745671 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:30:30.872467 1745671 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-07 13:30:30.861017733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:30:30.873070 1745671 kubeconfig.go:125] found "ha-362969" server: "https://192.168.49.254:8443"
	I1007 13:30:30.873109 1745671 api_server.go:166] Checking apiserver status ...
	I1007 13:30:30.873156 1745671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:30:30.884912 1745671 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	I1007 13:30:30.894687 1745671 api_server.go:182] apiserver freezer: "5:freezer:/docker/c4808aca2e7dbdef64b564ae3cdc7d364bfdf2ea6a7fa88618716805ca19bddb/crio/crio-5827bc0c8f9dce97ce0589099e4a9cbd86ed9004d9d31fc45821405d8a82b59d"
	I1007 13:30:30.894773 1745671 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c4808aca2e7dbdef64b564ae3cdc7d364bfdf2ea6a7fa88618716805ca19bddb/crio/crio-5827bc0c8f9dce97ce0589099e4a9cbd86ed9004d9d31fc45821405d8a82b59d/freezer.state
	I1007 13:30:30.905021 1745671 api_server.go:204] freezer state: "THAWED"
	I1007 13:30:30.905050 1745671 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1007 13:30:30.913533 1745671 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1007 13:30:30.913563 1745671 status.go:463] ha-362969 apiserver status = Running (err=<nil>)
	I1007 13:30:30.913574 1745671 status.go:176] ha-362969 status: &{Name:ha-362969 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:30:30.913590 1745671 status.go:174] checking status of ha-362969-m02 ...
	I1007 13:30:30.913977 1745671 cli_runner.go:164] Run: docker container inspect ha-362969-m02 --format={{.State.Status}}
	I1007 13:30:30.933110 1745671 status.go:371] ha-362969-m02 host status = "Stopped" (err=<nil>)
	I1007 13:30:30.933139 1745671 status.go:384] host is not running, skipping remaining checks
	I1007 13:30:30.933147 1745671 status.go:176] ha-362969-m02 status: &{Name:ha-362969-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:30:30.933167 1745671 status.go:174] checking status of ha-362969-m03 ...
	I1007 13:30:30.933492 1745671 cli_runner.go:164] Run: docker container inspect ha-362969-m03 --format={{.State.Status}}
	I1007 13:30:30.950356 1745671 status.go:371] ha-362969-m03 host status = "Running" (err=<nil>)
	I1007 13:30:30.950380 1745671 host.go:66] Checking if "ha-362969-m03" exists ...
	I1007 13:30:30.950694 1745671 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-362969-m03
	I1007 13:30:30.968861 1745671 host.go:66] Checking if "ha-362969-m03" exists ...
	I1007 13:30:30.969178 1745671 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:30:30.969221 1745671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m03
	I1007 13:30:30.986878 1745671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38291 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969-m03/id_rsa Username:docker}
	I1007 13:30:31.080952 1745671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:30:31.093341 1745671 kubeconfig.go:125] found "ha-362969" server: "https://192.168.49.254:8443"
	I1007 13:30:31.093373 1745671 api_server.go:166] Checking apiserver status ...
	I1007 13:30:31.093420 1745671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:30:31.106456 1745671 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1310/cgroup
	I1007 13:30:31.116901 1745671 api_server.go:182] apiserver freezer: "5:freezer:/docker/6f225576ccba9ef8aaabbc51540cf8af041b8d07520b2e16d586c853d873c1a0/crio/crio-499e13f5cffcca5b4266c882a89426e8df0629d6f5696c1586f540a7765a8fb4"
	I1007 13:30:31.116973 1745671 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6f225576ccba9ef8aaabbc51540cf8af041b8d07520b2e16d586c853d873c1a0/crio/crio-499e13f5cffcca5b4266c882a89426e8df0629d6f5696c1586f540a7765a8fb4/freezer.state
	I1007 13:30:31.127117 1745671 api_server.go:204] freezer state: "THAWED"
	I1007 13:30:31.127145 1745671 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1007 13:30:31.134902 1745671 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1007 13:30:31.134931 1745671 status.go:463] ha-362969-m03 apiserver status = Running (err=<nil>)
	I1007 13:30:31.134940 1745671 status.go:176] ha-362969-m03 status: &{Name:ha-362969-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:30:31.134985 1745671 status.go:174] checking status of ha-362969-m04 ...
	I1007 13:30:31.135331 1745671 cli_runner.go:164] Run: docker container inspect ha-362969-m04 --format={{.State.Status}}
	I1007 13:30:31.153206 1745671 status.go:371] ha-362969-m04 host status = "Running" (err=<nil>)
	I1007 13:30:31.153232 1745671 host.go:66] Checking if "ha-362969-m04" exists ...
	I1007 13:30:31.153529 1745671 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-362969-m04
	I1007 13:30:31.169522 1745671 host.go:66] Checking if "ha-362969-m04" exists ...
	I1007 13:30:31.169811 1745671 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:30:31.169926 1745671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-362969-m04
	I1007 13:30:31.188920 1745671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38296 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/ha-362969-m04/id_rsa Username:docker}
	I1007 13:30:31.284617 1745671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:30:31.296376 1745671 status.go:176] ha-362969-m04 status: &{Name:ha-362969-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.79s)
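
After node stop m02, the status call above still prints the per-node table but exits with status 7, so callers have to treat a non-zero exit as "some component is down" rather than as a failure to run the command. A Go sketch that captures both the table and the exit code; the profile name is a placeholder, and only the zero/non-zero distinction is relied on, not the specific value 7:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "ha-demo" // placeholder profile name

	out, err := exec.Command("minikube", "-p", profile, "status").CombinedOutput()
	fmt.Printf("%s", out) // the per-node table is printed either way

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes report healthy")
	case errors.As(err, &exitErr):
		// A non-zero exit (7 in the run above) marks degraded or stopped components.
		fmt.Printf("cluster degraded, status exit code %d\n", exitErr.ExitCode())
	default:
		log.Fatal(err) // the binary itself could not be started
	}
}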

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (25.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 node start m02 -v=7 --alsologtostderr
E1007 13:30:32.632219 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-362969 node start m02 -v=7 --alsologtostderr: (23.448473167s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-362969 status -v=7 --alsologtostderr: (1.595422855s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (25.19s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.346543603s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.35s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (247.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-362969 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-362969 -v=7 --alsologtostderr
E1007 13:31:13.597741 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-362969 -v=7 --alsologtostderr: (37.329172032s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-362969 --wait=true -v=7 --alsologtostderr
E1007 13:32:13.830892 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:32:35.519049 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:34:51.654465 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-362969 --wait=true -v=7 --alsologtostderr: (3m29.883304892s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-362969
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (247.42s)
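
RestartClusterKeepsNodes stops the whole profile, starts it again with --wait=true, and then checks that node list still shows every member. A Go sketch of that cycle; the profile name is a placeholder:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

// run streams a minikube subcommand and aborts on failure.
func run(args ...string) {
	cmd := exec.Command("minikube", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("minikube %v: %v", args, err)
	}
}

func main() {
	profile := "ha-demo" // placeholder profile name

	run("stop", "-p", profile)                 // stop every node in the profile
	run("start", "-p", profile, "--wait=true") // restart and wait for readiness

	// The node list should be unchanged by the restart.
	out, err := exec.Command("minikube", "node", "list", "-p", profile).Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}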

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-362969 node delete m03 -v=7 --alsologtostderr: (11.583392503s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.66s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1007 13:35:19.361123 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-362969 stop -v=7 --alsologtostderr: (35.732092309s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-362969 status -v=7 --alsologtostderr: exit status 7 (119.997386ms)

                                                
                                                
-- stdout --
	ha-362969
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-362969-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-362969-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 13:35:55.300518 1760284 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:35:55.300723 1760284 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:35:55.300751 1760284 out.go:358] Setting ErrFile to fd 2...
	I1007 13:35:55.300770 1760284 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:35:55.301023 1760284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-1688750/.minikube/bin
	I1007 13:35:55.301224 1760284 out.go:352] Setting JSON to false
	I1007 13:35:55.301276 1760284 mustload.go:65] Loading cluster: ha-362969
	I1007 13:35:55.301305 1760284 notify.go:220] Checking for updates...
	I1007 13:35:55.301750 1760284 config.go:182] Loaded profile config "ha-362969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:35:55.301792 1760284 status.go:174] checking status of ha-362969 ...
	I1007 13:35:55.302627 1760284 cli_runner.go:164] Run: docker container inspect ha-362969 --format={{.State.Status}}
	I1007 13:35:55.320911 1760284 status.go:371] ha-362969 host status = "Stopped" (err=<nil>)
	I1007 13:35:55.320934 1760284 status.go:384] host is not running, skipping remaining checks
	I1007 13:35:55.320941 1760284 status.go:176] ha-362969 status: &{Name:ha-362969 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:35:55.320966 1760284 status.go:174] checking status of ha-362969-m02 ...
	I1007 13:35:55.321283 1760284 cli_runner.go:164] Run: docker container inspect ha-362969-m02 --format={{.State.Status}}
	I1007 13:35:55.347681 1760284 status.go:371] ha-362969-m02 host status = "Stopped" (err=<nil>)
	I1007 13:35:55.347705 1760284 status.go:384] host is not running, skipping remaining checks
	I1007 13:35:55.347712 1760284 status.go:176] ha-362969-m02 status: &{Name:ha-362969-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:35:55.347730 1760284 status.go:174] checking status of ha-362969-m04 ...
	I1007 13:35:55.348020 1760284 cli_runner.go:164] Run: docker container inspect ha-362969-m04 --format={{.State.Status}}
	I1007 13:35:55.365127 1760284 status.go:371] ha-362969-m04 host status = "Stopped" (err=<nil>)
	I1007 13:35:55.365152 1760284 status.go:384] host is not running, skipping remaining checks
	I1007 13:35:55.365159 1760284 status.go:176] ha-362969-m04 status: &{Name:ha-362969-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.85s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (71.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-362969 --control-plane -v=7 --alsologtostderr
E1007 13:38:36.894548 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-362969 --control-plane -v=7 --alsologtostderr: (1m10.646883497s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-362969 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (71.61s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

                                                
                                    
TestJSONOutput/start/Command (74.59s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-609915 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1007 13:39:51.656688 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-609915 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m14.584501191s)
--- PASS: TestJSONOutput/start/Command (74.59s)
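
The JSONOutput tests run minikube with --output=json, which replaces the human-readable progress text with one JSON object per line; the Audit and CurrentSteps subtests then assert properties of that stream. A Go sketch that consumes such a stream line by line without assuming a particular event schema; the profile name is a placeholder:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Each line of --output=json is a self-contained JSON object describing a step or event.
	cmd := exec.Command("minikube", "start", "-p", "json-demo", // placeholder profile name
		"--output=json", "--driver=docker", "--container-runtime=crio")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some events carry long messages
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			log.Printf("skipping non-JSON line: %s", sc.Text())
			continue
		}
		// Print only the top-level keys; the exact schema is not assumed here.
		names := make([]string, 0, len(ev))
		for k := range ev {
			names = append(names, k)
		}
		fmt.Println(names)
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}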

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-609915 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-609915 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.76s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.95s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-609915 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-609915 --output=json --user=testUser: (5.949221863s)
--- PASS: TestJSONOutput/stop/Command (5.95s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
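Note: the DistinctCurrentSteps and IncreasingCurrentSteps subtests above assert properties of the "currentstep" field carried by minikube's JSON step events (the event shape is visible in the TestErrorJSONOutput stdout further below, where step events have type "io.k8s.sigs.minikube.step" and a data.currentstep value). The following is an illustrative sketch only, not the test's implementation: it reads one JSON event per line on stdin, for example piped from a --output=json run, and checks that currentstep strictly increases.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

// stepEvent mirrors only the fields this check needs from the JSON lines.
type stepEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	last := -1
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev stepEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // skip non-JSON lines and non-step events
		}
		cur, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil {
			continue
		}
		if cur <= last {
			fmt.Printf("currentstep %d does not increase past %d\n", cur, last)
			os.Exit(1)
		}
		last = cur
	}
	fmt.Println("currentstep values are strictly increasing")
}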

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-003148 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-003148 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (86.256765ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"587d67f1-569a-47ed-801d-138d18806842","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-003148] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"da440522-e348-4e70-ad6d-f93f47127d47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18424"}}
	{"specversion":"1.0","id":"4ad5077d-6977-44c4-b131-92e87d835cc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c2c02a5b-1290-4b8d-a7e7-eb37dd2d0d8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18424-1688750/kubeconfig"}}
	{"specversion":"1.0","id":"30e8ae96-3bbc-4403-a398-12a26a9b040b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-1688750/.minikube"}}
	{"specversion":"1.0","id":"75fee6bd-4293-4daa-bfe3-1310466843ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"06dd5cbc-f744-46b7-9e5d-460aa2be7242","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"25da8767-8e0e-4026-9ef6-610b2bf8ac05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-003148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-003148
--- PASS: TestErrorJSONOutput (0.24s)
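For reference, each stdout line above is a CloudEvents-style JSON object, and the failure this test expects surfaces as a final "io.k8s.sigs.minikube.error" event (here name DRV_UNSUPPORTED_OS with exitcode 56). Below is a minimal sketch, not part of the test suite, that scans such output for error events; it assumes only the field names visible in the captured stdout.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent covers only the fields visible in the captured stdout above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exitcode %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

It could be fed output such as the command captured above, e.g. out/minikube-linux-arm64 start -p json-output-error-003148 --memory=2200 --output=json --wait=true --driver=fail.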

                                                
                                    
TestKicCustomNetwork/create_custom_network (39.32s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-445229 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-445229 --network=: (37.140575788s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-445229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-445229
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-445229: (2.154071574s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.32s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (36.46s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-188859 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-188859 --network=bridge: (34.821388879s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-188859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-188859
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-188859: (1.618167415s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.46s)

                                                
                                    
TestKicExistingNetwork (34.23s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1007 13:42:11.841085 1694126 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1007 13:42:11.856945 1694126 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1007 13:42:11.857023 1694126 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1007 13:42:11.857040 1694126 cli_runner.go:164] Run: docker network inspect existing-network
W1007 13:42:11.872586 1694126 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1007 13:42:11.872617 1694126 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1007 13:42:11.872633 1694126 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1007 13:42:11.872848 1694126 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1007 13:42:11.889063 1694126 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0ea298a5d452 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:18:06:d4:dd} reservation:<nil>}
I1007 13:42:11.889512 1694126 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001f46ff0}
I1007 13:42:11.889586 1694126 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1007 13:42:11.889641 1694126 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1007 13:42:11.960286 1694126 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-309486 --network=existing-network
E1007 13:42:13.830941 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-309486 --network=existing-network: (32.062438965s)
helpers_test.go:175: Cleaning up "existing-network-309486" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-309486
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-309486: (2.01486232s)
I1007 13:42:46.054969 1694126 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.23s)
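For context, the trace above shows how the test pre-creates the "existing-network" Docker network before pointing minikube at it: the initial inspect fails (network not found), a free private subnet is chosen after skipping the taken 192.168.49.0/24, and then docker network create is run with the bridge driver, subnet, gateway, MTU option and minikube labels. Below is a minimal standalone sketch of that same create call, with the flags copied from the logged command; the network name, subnet and gateway are placeholders taken from this run, and this is not the helper the test actually uses.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Placeholders; the values mirror the command logged by network_create.go above.
	name, subnet, gateway := "existing-network", "192.168.58.0/24", "192.168.58.1"
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io="+name,
		name)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("network create failed:", err)
	}
}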

                                                
                                    
TestKicCustomSubnet (34.02s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-102806 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-102806 --subnet=192.168.60.0/24: (31.921605547s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-102806 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-102806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-102806
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-102806: (2.077655355s)
--- PASS: TestKicCustomSubnet (34.02s)

                                                
                                    
TestKicStaticIP (30.89s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-888701 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-888701 --static-ip=192.168.200.200: (28.602087213s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-888701 ip
helpers_test.go:175: Cleaning up "static-ip-888701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-888701
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-888701: (2.13740128s)
--- PASS: TestKicStaticIP (30.89s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (64.2s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-988482 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-988482 --driver=docker  --container-runtime=crio: (27.972377208s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-991058 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-991058 --driver=docker  --container-runtime=crio: (30.929246489s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-988482
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-991058
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-991058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-991058
E1007 13:44:51.655041 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-991058: (1.939621641s)
helpers_test.go:175: Cleaning up "first-988482" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-988482
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-988482: (1.952533975s)
--- PASS: TestMinikubeProfile (64.20s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.16s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-229512 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-229512 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.154252506s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.16s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-229512 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.54s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-231711 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-231711 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.543923922s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.54s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-231711 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-229512 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-229512 --alsologtostderr -v=5: (1.641527276s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-231711 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-231711
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-231711: (1.223779678s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.73s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-231711
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-231711: (6.73006992s)
--- PASS: TestMountStart/serial/RestartStopped (7.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-231711 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (112.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-014879 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1007 13:46:14.723144 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:47:13.830572 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-014879 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m52.411160464s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.91s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (8.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-014879 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-014879 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-014879 -- rollout status deployment/busybox: (6.257891345s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-014879 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-014879 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-014879 -- exec busybox-7dff88458-b98g2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-014879 -- exec busybox-7dff88458-xmkwx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-014879 -- exec busybox-7dff88458-b98g2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-014879 -- exec busybox-7dff88458-xmkwx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-014879 -- exec busybox-7dff88458-b98g2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-014879 -- exec busybox-7dff88458-xmkwx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.15s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-014879 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-014879 -- exec busybox-7dff88458-b98g2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-014879 -- exec busybox-7dff88458-b98g2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-014879 -- exec busybox-7dff88458-xmkwx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-014879 -- exec busybox-7dff88458-xmkwx -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)

                                                
                                    
TestMultiNode/serial/AddNode (58.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-014879 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-014879 -v 3 --alsologtostderr: (58.213569956s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.87s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-014879 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 cp testdata/cp-test.txt multinode-014879:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 cp multinode-014879:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2885650611/001/cp-test_multinode-014879.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 cp multinode-014879:/home/docker/cp-test.txt multinode-014879-m02:/home/docker/cp-test_multinode-014879_multinode-014879-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879-m02 "sudo cat /home/docker/cp-test_multinode-014879_multinode-014879-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 cp multinode-014879:/home/docker/cp-test.txt multinode-014879-m03:/home/docker/cp-test_multinode-014879_multinode-014879-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879-m03 "sudo cat /home/docker/cp-test_multinode-014879_multinode-014879-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 cp testdata/cp-test.txt multinode-014879-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 cp multinode-014879-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2885650611/001/cp-test_multinode-014879-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 cp multinode-014879-m02:/home/docker/cp-test.txt multinode-014879:/home/docker/cp-test_multinode-014879-m02_multinode-014879.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879 "sudo cat /home/docker/cp-test_multinode-014879-m02_multinode-014879.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 cp multinode-014879-m02:/home/docker/cp-test.txt multinode-014879-m03:/home/docker/cp-test_multinode-014879-m02_multinode-014879-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879-m03 "sudo cat /home/docker/cp-test_multinode-014879-m02_multinode-014879-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 cp testdata/cp-test.txt multinode-014879-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 cp multinode-014879-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2885650611/001/cp-test_multinode-014879-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 cp multinode-014879-m03:/home/docker/cp-test.txt multinode-014879:/home/docker/cp-test_multinode-014879-m03_multinode-014879.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879 "sudo cat /home/docker/cp-test_multinode-014879-m03_multinode-014879.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 cp multinode-014879-m03:/home/docker/cp-test.txt multinode-014879-m02:/home/docker/cp-test_multinode-014879-m03_multinode-014879-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 ssh -n multinode-014879-m02 "sudo cat /home/docker/cp-test_multinode-014879-m03_multinode-014879-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.08s)

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-014879 node stop m03: (1.219224822s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-014879 status: exit status 7 (500.71421ms)

                                                
                                                
-- stdout --
	multinode-014879
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-014879-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-014879-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-014879 status --alsologtostderr: exit status 7 (537.923644ms)

                                                
                                                
-- stdout --
	multinode-014879
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-014879-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-014879-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 13:48:36.021861 1814608 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:48:36.022024 1814608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:48:36.022036 1814608 out.go:358] Setting ErrFile to fd 2...
	I1007 13:48:36.022042 1814608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:48:36.022350 1814608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-1688750/.minikube/bin
	I1007 13:48:36.022543 1814608 out.go:352] Setting JSON to false
	I1007 13:48:36.022580 1814608 mustload.go:65] Loading cluster: multinode-014879
	I1007 13:48:36.022646 1814608 notify.go:220] Checking for updates...
	I1007 13:48:36.022993 1814608 config.go:182] Loaded profile config "multinode-014879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:48:36.023006 1814608 status.go:174] checking status of multinode-014879 ...
	I1007 13:48:36.023916 1814608 cli_runner.go:164] Run: docker container inspect multinode-014879 --format={{.State.Status}}
	I1007 13:48:36.043164 1814608 status.go:371] multinode-014879 host status = "Running" (err=<nil>)
	I1007 13:48:36.043192 1814608 host.go:66] Checking if "multinode-014879" exists ...
	I1007 13:48:36.043501 1814608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-014879
	I1007 13:48:36.071458 1814608 host.go:66] Checking if "multinode-014879" exists ...
	I1007 13:48:36.071818 1814608 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:48:36.071874 1814608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-014879
	I1007 13:48:36.095469 1814608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38401 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/multinode-014879/id_rsa Username:docker}
	I1007 13:48:36.188672 1814608 ssh_runner.go:195] Run: systemctl --version
	I1007 13:48:36.192841 1814608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:48:36.204469 1814608 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:48:36.266404 1814608 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-07 13:48:36.256216732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:48:36.266998 1814608 kubeconfig.go:125] found "multinode-014879" server: "https://192.168.67.2:8443"
	I1007 13:48:36.267039 1814608 api_server.go:166] Checking apiserver status ...
	I1007 13:48:36.267087 1814608 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:48:36.278303 1814608 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1422/cgroup
	I1007 13:48:36.287814 1814608 api_server.go:182] apiserver freezer: "5:freezer:/docker/9a9aa10647457b699096d3f35e9cd452116034526c0f3c3fcea4f30783e24b30/crio/crio-3cb1f4c42d4939a506cb2b7ac9c08deffee416eb619eab1051dc23d5869a2392"
	I1007 13:48:36.287878 1814608 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9a9aa10647457b699096d3f35e9cd452116034526c0f3c3fcea4f30783e24b30/crio/crio-3cb1f4c42d4939a506cb2b7ac9c08deffee416eb619eab1051dc23d5869a2392/freezer.state
	I1007 13:48:36.296605 1814608 api_server.go:204] freezer state: "THAWED"
	I1007 13:48:36.296633 1814608 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1007 13:48:36.304493 1814608 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1007 13:48:36.304523 1814608 status.go:463] multinode-014879 apiserver status = Running (err=<nil>)
	I1007 13:48:36.304534 1814608 status.go:176] multinode-014879 status: &{Name:multinode-014879 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:48:36.304550 1814608 status.go:174] checking status of multinode-014879-m02 ...
	I1007 13:48:36.304874 1814608 cli_runner.go:164] Run: docker container inspect multinode-014879-m02 --format={{.State.Status}}
	I1007 13:48:36.321522 1814608 status.go:371] multinode-014879-m02 host status = "Running" (err=<nil>)
	I1007 13:48:36.321545 1814608 host.go:66] Checking if "multinode-014879-m02" exists ...
	I1007 13:48:36.321817 1814608 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-014879-m02
	I1007 13:48:36.337596 1814608 host.go:66] Checking if "multinode-014879-m02" exists ...
	I1007 13:48:36.337910 1814608 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:48:36.337967 1814608 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-014879-m02
	I1007 13:48:36.359094 1814608 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38406 SSHKeyPath:/home/jenkins/minikube-integration/18424-1688750/.minikube/machines/multinode-014879-m02/id_rsa Username:docker}
	I1007 13:48:36.452742 1814608 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:48:36.463852 1814608 status.go:176] multinode-014879-m02 status: &{Name:multinode-014879-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:48:36.463886 1814608 status.go:174] checking status of multinode-014879-m03 ...
	I1007 13:48:36.464166 1814608 cli_runner.go:164] Run: docker container inspect multinode-014879-m03 --format={{.State.Status}}
	I1007 13:48:36.486277 1814608 status.go:371] multinode-014879-m03 host status = "Stopped" (err=<nil>)
	I1007 13:48:36.486300 1814608 status.go:384] host is not running, skipping remaining checks
	I1007 13:48:36.486306 1814608 status.go:176] multinode-014879-m03 status: &{Name:multinode-014879-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
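For context, the stderr trace above shows the per-node checks behind minikube status: each node's host state comes from docker container inspect <node> --format={{.State.Status}}, and for the control-plane node the apiserver is additionally probed via its freezer cgroup state and the /healthz endpoint. Below is a minimal sketch of just the host-state query; the container names are taken from this run, and the mapping of Docker states to Running/Stopped is an illustrative assumption, not minikube's actual logic.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus runs the same container-state query seen in the trace above and
// maps the result to Running/Stopped labels (assumed mapping, for illustration).
func hostStatus(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", container,
		"--format={{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	if strings.TrimSpace(string(out)) == "running" {
		return "Running", nil
	}
	return "Stopped", nil
}

func main() {
	for _, node := range []string{"multinode-014879", "multinode-014879-m02", "multinode-014879-m03"} {
		status, err := hostStatus(node)
		if err != nil {
			fmt.Println(node+":", err)
			continue
		}
		fmt.Println(node+":", status)
	}
}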

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-014879 node start m03 -v=7 --alsologtostderr: (9.016011257s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.78s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-014879
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-014879
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-014879: (24.84154441s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-014879 --wait=true -v=8 --alsologtostderr
E1007 13:49:51.654725 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-014879 --wait=true -v=8 --alsologtostderr: (55.966989924s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-014879
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.94s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-014879 node delete m03: (4.609906814s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.32s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-014879 stop: (23.685290295s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-014879 status: exit status 7 (102.663265ms)

                                                
                                                
-- stdout --
	multinode-014879
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-014879-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-014879 status --alsologtostderr: exit status 7 (93.139898ms)

                                                
                                                
-- stdout --
	multinode-014879
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-014879-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 13:50:36.368858 1822024 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:50:36.369077 1822024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:50:36.369104 1822024 out.go:358] Setting ErrFile to fd 2...
	I1007 13:50:36.369126 1822024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:50:36.369488 1822024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-1688750/.minikube/bin
	I1007 13:50:36.369775 1822024 out.go:352] Setting JSON to false
	I1007 13:50:36.369835 1822024 mustload.go:65] Loading cluster: multinode-014879
	I1007 13:50:36.370596 1822024 notify.go:220] Checking for updates...
	I1007 13:50:36.370836 1822024 config.go:182] Loaded profile config "multinode-014879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:50:36.370868 1822024 status.go:174] checking status of multinode-014879 ...
	I1007 13:50:36.371414 1822024 cli_runner.go:164] Run: docker container inspect multinode-014879 --format={{.State.Status}}
	I1007 13:50:36.388351 1822024 status.go:371] multinode-014879 host status = "Stopped" (err=<nil>)
	I1007 13:50:36.388372 1822024 status.go:384] host is not running, skipping remaining checks
	I1007 13:50:36.388379 1822024 status.go:176] multinode-014879 status: &{Name:multinode-014879 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:50:36.388418 1822024 status.go:174] checking status of multinode-014879-m02 ...
	I1007 13:50:36.388731 1822024 cli_runner.go:164] Run: docker container inspect multinode-014879-m02 --format={{.State.Status}}
	I1007 13:50:36.405297 1822024 status.go:371] multinode-014879-m02 host status = "Stopped" (err=<nil>)
	I1007 13:50:36.405317 1822024 status.go:384] host is not running, skipping remaining checks
	I1007 13:50:36.405324 1822024 status.go:176] multinode-014879-m02 status: &{Name:multinode-014879-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.88s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-014879 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-014879 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (48.801472467s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-014879 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.50s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (39.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-014879
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-014879-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-014879-m02 --driver=docker  --container-runtime=crio: exit status 14 (96.951301ms)

                                                
                                                
-- stdout --
	* [multinode-014879-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-1688750/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-1688750/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-014879-m02' is duplicated with machine name 'multinode-014879-m02' in profile 'multinode-014879'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-014879-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-014879-m03 --driver=docker  --container-runtime=crio: (37.420500996s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-014879
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-014879: exit status 80 (348.68334ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-014879 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-014879-m03 already exists in multinode-014879-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-014879-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-014879-m03: (1.980171468s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.90s)
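For anyone replaying this check by hand: minikube requires profile names to be unique, and a multi-node profile also reserves its per-node machine names (profile-m02, profile-m03, ...). A minimal sketch, not taken from the test code, using a hypothetical profile name demo and assuming the docker driver is available:

$ minikube start -p demo --nodes=2 --driver=docker --container-runtime=crio
$ minikube start -p demo-m02 --driver=docker --container-runtime=crio
# expected: exit 14 (MK_USAGE), "Profile name should be unique", because demo-m02
# is already the machine name of demo's second node
$ minikube delete -p demo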

                                                
                                    
TestPreload (128.91s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-373876 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1007 13:52:13.831118 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-373876 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m36.006964391s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-373876 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-373876 image pull gcr.io/k8s-minikube/busybox: (3.560159012s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-373876
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-373876: (5.817364177s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-373876 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-373876 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (20.648713121s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-373876 image list
helpers_test.go:175: Cleaning up "test-preload-373876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-373876
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-373876: (2.519699453s)
--- PASS: TestPreload (128.91s)
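The preload test above boils down to: create a cluster with preloaded tarballs disabled, pull an extra image into the node, restart the node, and check the image is still listed. A hedged sketch of the same flow with a hypothetical profile name preload-demo:

$ minikube start -p preload-demo --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
$ minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
$ minikube stop -p preload-demo
$ minikube start -p preload-demo --driver=docker --container-runtime=crio
$ minikube -p preload-demo image list     # busybox should still appear after the restart
$ minikube delete -p preload-demo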

                                                
                                    
TestScheduledStopUnix (109.46s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-487805 --memory=2048 --driver=docker  --container-runtime=crio
E1007 13:54:51.654333 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-487805 --memory=2048 --driver=docker  --container-runtime=crio: (32.948144999s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-487805 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-487805 -n scheduled-stop-487805
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-487805 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1007 13:54:52.303072 1694126 retry.go:31] will retry after 70.487µs: open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/scheduled-stop-487805/pid: no such file or directory
I1007 13:54:52.303210 1694126 retry.go:31] will retry after 102.001µs: open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/scheduled-stop-487805/pid: no such file or directory
I1007 13:54:52.304304 1694126 retry.go:31] will retry after 185.431µs: open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/scheduled-stop-487805/pid: no such file or directory
I1007 13:54:52.305368 1694126 retry.go:31] will retry after 345.635µs: open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/scheduled-stop-487805/pid: no such file or directory
I1007 13:54:52.306485 1694126 retry.go:31] will retry after 643.407µs: open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/scheduled-stop-487805/pid: no such file or directory
I1007 13:54:52.309191 1694126 retry.go:31] will retry after 934.588µs: open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/scheduled-stop-487805/pid: no such file or directory
I1007 13:54:52.310280 1694126 retry.go:31] will retry after 1.617122ms: open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/scheduled-stop-487805/pid: no such file or directory
I1007 13:54:52.312481 1694126 retry.go:31] will retry after 1.559048ms: open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/scheduled-stop-487805/pid: no such file or directory
I1007 13:54:52.314645 1694126 retry.go:31] will retry after 1.436451ms: open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/scheduled-stop-487805/pid: no such file or directory
I1007 13:54:52.316878 1694126 retry.go:31] will retry after 5.420099ms: open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/scheduled-stop-487805/pid: no such file or directory
I1007 13:54:52.323106 1694126 retry.go:31] will retry after 8.626748ms: open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/scheduled-stop-487805/pid: no such file or directory
I1007 13:54:52.332362 1694126 retry.go:31] will retry after 7.991189ms: open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/scheduled-stop-487805/pid: no such file or directory
I1007 13:54:52.340660 1694126 retry.go:31] will retry after 17.704577ms: open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/scheduled-stop-487805/pid: no such file or directory
I1007 13:54:52.363615 1694126 retry.go:31] will retry after 14.099686ms: open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/scheduled-stop-487805/pid: no such file or directory
I1007 13:54:52.378840 1694126 retry.go:31] will retry after 16.296388ms: open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/scheduled-stop-487805/pid: no such file or directory
I1007 13:54:52.396070 1694126 retry.go:31] will retry after 41.224237ms: open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/scheduled-stop-487805/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-487805 --cancel-scheduled
E1007 13:55:16.896178 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-487805 -n scheduled-stop-487805
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-487805
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-487805 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-487805
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-487805: exit status 7 (72.025162ms)

                                                
                                                
-- stdout --
	scheduled-stop-487805
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-487805 -n scheduled-stop-487805
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-487805 -n scheduled-stop-487805: exit status 7 (72.267592ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-487805" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-487805
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-487805: (4.873072668s)
--- PASS: TestScheduledStopUnix (109.46s)
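Outside the test harness, the scheduled-stop flow exercised above looks like the following. This is a sketch with a hypothetical profile name sched-demo; the exit code 7 from status simply means the host is stopped:

$ minikube start -p sched-demo --driver=docker --container-runtime=crio
$ minikube stop -p sched-demo --schedule 5m
$ minikube status -p sched-demo --format='{{.TimeToStop}}'   # shows the pending schedule
$ minikube stop -p sched-demo --cancel-scheduled             # cancels it
$ minikube stop -p sched-demo --schedule 15s
# ~15s later the host is down:
$ minikube status -p sched-demo --format='{{.Host}}'         # prints "Stopped", exit status 7
$ minikube delete -p sched-demo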

                                                
                                    
TestInsufficientStorage (10.38s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-033743 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-033743 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.886007024s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"182b9e22-a5b1-49c6-a975-2e0049e089e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-033743] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e118f2e-7176-4232-bdc6-b523e266a8b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18424"}}
	{"specversion":"1.0","id":"6159ced4-35aa-45e2-a1ae-159ba8b8ffc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a47af513-8326-4b23-b4ef-e83eb5b41a84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18424-1688750/kubeconfig"}}
	{"specversion":"1.0","id":"cafebb29-04db-4bf7-9ee0-71126bb6b223","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-1688750/.minikube"}}
	{"specversion":"1.0","id":"fada3b56-0caf-4227-83f7-37749c6879aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f5868cf1-fcac-4d46-9633-25a94ba1803f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0736b958-a639-47ce-aae7-6462833ca1c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"289bbf84-a41e-41bd-8159-3e7306ed3f4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0f319886-0e64-4ab1-b24b-076cc2db4771","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"07ae0537-db83-48aa-b6ed-cf51a68185a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"371a6921-ead0-455b-b69b-fc03689243f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-033743\" primary control-plane node in \"insufficient-storage-033743\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"337bff34-5d30-42ea-a4af-4ca62e323362","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727731891-master ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"121a8f12-eee6-45b6-b057-3c9270840f3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"55045324-38b5-46c8-8a28-a5ab39ecf4a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-033743 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-033743 --output=json --layout=cluster: exit status 7 (303.766354ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-033743","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-033743","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 13:56:16.454569 1839902 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-033743" does not appear in /home/jenkins/minikube-integration/18424-1688750/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-033743 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-033743 --output=json --layout=cluster: exit status 7 (296.434072ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-033743","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-033743","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 13:56:16.751060 1839963 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-033743" does not appear in /home/jenkins/minikube-integration/18424-1688750/kubeconfig
	E1007 13:56:16.761949 1839963 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/insufficient-storage-033743/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-033743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-033743
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-033743: (1.895096818s)
--- PASS: TestInsufficientStorage (10.38s)
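The storage failure above is simulated: the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE values printed in the JSON events appear to be test-only knobs that make minikube believe /var is full, at which point start exits 26 (RSRC_DOCKER_STORAGE) and prints the prune advice. A rough sketch, with a hypothetical profile name storage-demo, assuming those variables behave outside the harness the way they do here:

$ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
  minikube start -p storage-demo --output=json --driver=docker --container-runtime=crio
# expected: exit 26 with the RSRC_DOCKER_STORAGE advice; "docker system prune" is the
# first suggested remediation, and the message notes '--force' skips the check
$ minikube delete -p storage-demo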

                                                
                                    
TestRunningBinaryUpgrade (66.26s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1700501154 start -p running-upgrade-357116 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1700501154 start -p running-upgrade-357116 --memory=2200 --vm-driver=docker  --container-runtime=crio: (38.592393741s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-357116 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-357116 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.724914034s)
helpers_test.go:175: Cleaning up "running-upgrade-357116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-357116
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-357116: (2.975890768s)
--- PASS: TestRunningBinaryUpgrade (66.26s)
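The running-binary upgrade is simply two start invocations against the same profile, first with an old release and then with the binary under test, without a stop in between. A sketch with a hypothetical profile name upgrade-demo and an old binary at a hypothetical path /tmp/minikube-v1.26.0:

$ /tmp/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=crio
$ out/minikube-linux-arm64 start -p upgrade-demo --memory=2200 --driver=docker --container-runtime=crio
$ out/minikube-linux-arm64 delete -p upgrade-demo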

                                                
                                    
TestKubernetesUpgrade (386.18s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-431816 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-431816 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m11.96126402s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-431816
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-431816: (1.850060083s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-431816 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-431816 status --format={{.Host}}: exit status 7 (104.258244ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-431816 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-431816 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m34.504659445s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-431816 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-431816 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-431816 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (134.156742ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-431816] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-1688750/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-1688750/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-431816
	    minikube start -p kubernetes-upgrade-431816 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4318162 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-431816 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-431816 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-431816 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.826442241s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-431816" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-431816
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-431816: (2.661323114s)
--- PASS: TestKubernetesUpgrade (386.18s)
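The upgrade/downgrade sequence above can be replayed directly; the only surprise is that an in-place downgrade is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED) and must go through delete and recreate, exactly as the suggestion text says. Hedged sketch with a hypothetical profile name k8s-demo:

$ minikube start -p k8s-demo --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
$ minikube stop -p k8s-demo
$ minikube start -p k8s-demo --kubernetes-version=v1.31.1 --driver=docker --container-runtime=crio
$ minikube start -p k8s-demo --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio   # exit 106
# to really go back to v1.20.0, recreate the cluster as the error suggests:
$ minikube delete -p k8s-demo
$ minikube start -p k8s-demo --kubernetes-version=v1.20.0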

                                                
                                    
TestMissingContainerUpgrade (153.48s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1322927671 start -p missing-upgrade-245588 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1322927671 start -p missing-upgrade-245588 --memory=2200 --driver=docker  --container-runtime=crio: (1m22.668125913s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-245588
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-245588: (10.461689762s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-245588
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-245588 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-245588 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (57.167604756s)
helpers_test.go:175: Cleaning up "missing-upgrade-245588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-245588
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-245588: (2.19530045s)
--- PASS: TestMissingContainerUpgrade (153.48s)
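The missing-container scenario removes the node container behind minikube's back (the container is named after the profile) and checks that the newer binary can recreate it from the stored profile. Sketch with a hypothetical profile name missing-demo and a hypothetical old binary at /tmp/minikube-v1.26.0:

$ /tmp/minikube-v1.26.0 start -p missing-demo --memory=2200 --driver=docker --container-runtime=crio
$ docker stop missing-demo && docker rm missing-demo    # simulate the lost container
$ out/minikube-linux-arm64 start -p missing-demo --driver=docker --container-runtime=crio
$ out/minikube-linux-arm64 delete -p missing-demo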

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-433990 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-433990 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (95.447137ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-433990] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-1688750/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-1688750/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
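As the stderr above shows, --no-kubernetes and --kubernetes-version are mutually exclusive and the start fails fast with exit 14. If a version is pinned in the global config, the fix is the one minikube itself prints. Sketch with a hypothetical profile name nok8s-demo:

$ minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio   # exit 14 (MK_USAGE)
$ minikube config unset kubernetes-version
$ minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=crio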

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-433990 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-433990 --driver=docker  --container-runtime=crio: (39.487540743s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-433990 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.85s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (10.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-433990 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-433990 --no-kubernetes --driver=docker  --container-runtime=crio: (5.909782833s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-433990 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-433990 status -o json: exit status 2 (406.154981ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-433990","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-433990
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-433990: (3.93572728s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.25s)
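Re-running start with --no-kubernetes on an existing profile leaves the host container up while kubelet and apiserver stay down, which is why the JSON status above shows Host Running / Kubelet Stopped and why the plain status call exits 2. A quick check (hypothetical profile name nok8s-demo):

$ minikube -p nok8s-demo status -o json; echo "exit=$?"
# expect "Host":"Running", "Kubelet":"Stopped", "APIServer":"Stopped" and a non-zero exit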

                                                
                                    
TestNoKubernetes/serial/Start (9.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-433990 --no-kubernetes --driver=docker  --container-runtime=crio
E1007 13:57:13.830886 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-433990 --no-kubernetes --driver=docker  --container-runtime=crio: (9.19562092s)
--- PASS: TestNoKubernetes/serial/Start (9.20s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-433990 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-433990 "sudo systemctl is-active --quiet service kubelet": exit status 1 (336.268783ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
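Verifying that Kubernetes really is not running reduces to a systemd check inside the node over SSH; a non-zero exit is the desired outcome on a --no-kubernetes profile. Sketch (hypothetical profile name nok8s-demo):

$ minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet"; echo $?
# non-zero here means the kubelet unit is not active, which is what the test expects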

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.25s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-433990
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-433990: (1.291988425s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-433990 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-433990 --driver=docker  --container-runtime=crio: (7.881605452s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.88s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-433990 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-433990 "sudo systemctl is-active --quiet service kubelet": exit status 1 (391.398718ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.92s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (76.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2554328149 start -p stopped-upgrade-143744 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2554328149 start -p stopped-upgrade-143744 --memory=2200 --vm-driver=docker  --container-runtime=crio: (40.20474007s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2554328149 -p stopped-upgrade-143744 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2554328149 -p stopped-upgrade-143744 stop: (2.673662604s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-143744 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1007 13:59:51.655000 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-143744 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.320309808s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (76.20s)
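The stopped-binary path is the same upgrade idea with an explicit stop issued by the old binary before the new one takes over, after which logs can be pulled from the upgraded profile. Sketch with a hypothetical profile name stopped-demo and a hypothetical old binary at /tmp/minikube-v1.26.0:

$ /tmp/minikube-v1.26.0 start -p stopped-demo --memory=2200 --vm-driver=docker --container-runtime=crio
$ /tmp/minikube-v1.26.0 -p stopped-demo stop
$ out/minikube-linux-arm64 start -p stopped-demo --memory=2200 --driver=docker --container-runtime=crio
$ out/minikube-linux-arm64 logs -p stopped-demo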

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-143744
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                    
TestPause/serial/Start (49.99s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-603795 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-603795 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (49.985622207s)
--- PASS: TestPause/serial/Start (49.99s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (22.48s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-603795 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1007 14:02:13.830801 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-603795 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.461080271s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (22.48s)

                                                
                                    
TestPause/serial/Pause (0.87s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-603795 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.87s)

                                                
                                    
TestPause/serial/VerifyStatus (0.43s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-603795 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-603795 --output=json --layout=cluster: exit status 2 (427.774126ms)

                                                
                                                
-- stdout --
	{"Name":"pause-603795","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-603795","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)
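With --layout=cluster, status reports HTTP-style codes (200 OK, 405 Stopped, 418 Paused) and the command exits 2 while anything is paused, so scripts that poll a paused cluster need to tolerate that exit code. Hedged check with a hypothetical profile name pause-demo:

$ minikube pause -p pause-demo
$ minikube status -p pause-demo --output=json --layout=cluster; echo "exit=$?"
# expect StatusName "Paused" (418) for the apiserver and exit=2
$ minikube unpause -p pause-demo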

                                                
                                    
TestPause/serial/Unpause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-603795 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.76s)

                                                
                                    
TestPause/serial/PauseAgain (0.86s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-603795 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.86s)

                                                
                                    
TestPause/serial/DeletePaused (2.87s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-603795 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-603795 --alsologtostderr -v=5: (2.867191158s)
--- PASS: TestPause/serial/DeletePaused (2.87s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-603795
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-603795: exit status 1 (19.642992ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-603795: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.37s)
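After delete, cleanup can be confirmed from the Docker side just as the test does: the profile disappears from the profile list, inspecting its volume fails, and no leftover container or network should remain. Sketch with the same hypothetical profile name pause-demo:

$ minikube delete -p pause-demo
$ minikube profile list --output json
$ docker ps -a
$ docker volume inspect pause-demo    # expected to fail: no such volume
$ docker network ls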

                                                
                                    
TestNetworkPlugins/group/false (6.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-254617 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-254617 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (382.549767ms)

                                                
                                                
-- stdout --
	* [false-254617] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-1688750/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-1688750/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 14:03:18.298241 1877745 out.go:345] Setting OutFile to fd 1 ...
	I1007 14:03:18.298381 1877745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 14:03:18.298389 1877745 out.go:358] Setting ErrFile to fd 2...
	I1007 14:03:18.298395 1877745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 14:03:18.298637 1877745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-1688750/.minikube/bin
	I1007 14:03:18.299057 1877745 out.go:352] Setting JSON to false
	I1007 14:03:18.300111 1877745 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":99950,"bootTime":1728209849,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1007 14:03:18.300190 1877745 start.go:139] virtualization:  
	I1007 14:03:18.303703 1877745 out.go:177] * [false-254617] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 14:03:18.307205 1877745 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 14:03:18.307375 1877745 notify.go:220] Checking for updates...
	I1007 14:03:18.312749 1877745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 14:03:18.315420 1877745 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-1688750/kubeconfig
	I1007 14:03:18.318408 1877745 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-1688750/.minikube
	I1007 14:03:18.321040 1877745 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 14:03:18.323711 1877745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 14:03:18.326990 1877745 config.go:182] Loaded profile config "kubernetes-upgrade-431816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 14:03:18.327163 1877745 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 14:03:18.373661 1877745 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 14:03:18.373807 1877745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 14:03:18.564736 1877745 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 14:03:18.554763989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 14:03:18.564844 1877745 docker.go:318] overlay module found
	I1007 14:03:18.572020 1877745 out.go:177] * Using the docker driver based on user configuration
	I1007 14:03:18.574825 1877745 start.go:297] selected driver: docker
	I1007 14:03:18.574846 1877745 start.go:901] validating driver "docker" against <nil>
	I1007 14:03:18.574859 1877745 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 14:03:18.579734 1877745 out.go:201] 
	W1007 14:03:18.582392 1877745 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1007 14:03:18.584997 1877745 out.go:201] 

                                                
                                                
** /stderr **
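The "false" network-plugin profile is rejected before any node is created: with the crio runtime a CNI is mandatory, so --cni=false exits 14 (MK_USAGE), and the debug dump that follows only confirms that no context or profile ever existed. A hedged sketch (hypothetical profile name cni-demo; --cni=bridge is just one accepted alternative):

$ minikube start -p cni-demo --cni=false --driver=docker --container-runtime=crio    # exit 14: "crio" requires CNI
$ minikube start -p cni-demo --cni=bridge --driver=docker --container-runtime=crio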
net_test.go:88: 
----------------------- debugLogs start: false-254617 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-254617

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-254617

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-254617

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-254617

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-254617

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-254617

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-254617

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-254617

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-254617

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-254617

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-254617

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-254617" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-254617" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Oct 2024 14:03:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-431816
contexts:
- context:
    cluster: kubernetes-upgrade-431816
    extensions:
    - extension:
        last-update: Mon, 07 Oct 2024 14:03:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-431816
  name: kubernetes-upgrade-431816
current-context: kubernetes-upgrade-431816
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-431816
  user:
    client-certificate: /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kubernetes-upgrade-431816/client.crt
    client-key: /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kubernetes-upgrade-431816/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-254617

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-254617"

                                                
                                                
----------------------- debugLogs end: false-254617 [took: 5.602536968s] --------------------------------
helpers_test.go:175: Cleaning up "false-254617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-254617
--- PASS: TestNetworkPlugins/group/false (6.26s)
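Note: the "Profile "false-254617" not found" and "context ... does not exist" lines in the debug dump above are expected here; the false-254617 profile is never actually started in this run (the test passes in about 6 seconds and the profile is deleted immediately afterwards). A minimal sketch, assuming a host set up like this CI worker, of how to confirm which profiles and contexts really exist:

  # list the minikube profiles present on the host
  minikube profile list

  # list kubectl contexts; false-254617 will not appear
  kubectl config get-contexts
  kubectl config current-context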

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (162.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-757661 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1007 14:04:51.654458 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:07:13.830304 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-757661 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m42.512352782s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (162.51s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-757661 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8cf74ffe-43e4-457a-a9cc-77e6e213d287] Pending
helpers_test.go:344: "busybox" [8cf74ffe-43e4-457a-a9cc-77e6e213d287] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8cf74ffe-43e4-457a-a9cc-77e6e213d287] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.00477466s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-757661 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.60s)
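The DeployApp step creates a busybox pod from testdata/busybox.yaml (not reproduced in this log) and then execs into it. A rough shell equivalent, reusing the image and label visible in the output above; the kubectl run flags and the sleep command are assumptions, not the test's actual manifest:

  # hypothetical reconstruction of the busybox test workload
  kubectl --context old-k8s-version-757661 run busybox \
    --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
    --labels=integration-test=busybox \
    --restart=Never \
    --command -- sleep 3600

  # wait for readiness and run the same check the test performs
  kubectl --context old-k8s-version-757661 wait --for=condition=Ready pod/busybox --timeout=8m
  kubectl --context old-k8s-version-757661 exec busybox -- /bin/sh -c "ulimit -n"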

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-757661 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-757661 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.024048607s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-757661 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.21s)
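This step enables the metrics-server addon with its image and registry redirected to a fake echoserver, then describes the deployment. A short sketch, assuming the same profile, of how to check that the override actually reached the deployment spec:

  minikube addons enable metrics-server -p old-k8s-version-757661 \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
    --registries=MetricsServer=fake.domain

  # the container image should now point at the fake registry
  kubectl --context old-k8s-version-757661 -n kube-system get deploy metrics-server \
    -o jsonpath='{.spec.template.spec.containers[0].image}'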

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-757661 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-757661 --alsologtostderr -v=3: (12.408684963s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.41s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-757661 -n old-k8s-version-757661
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-757661 -n old-k8s-version-757661: exit status 7 (74.997608ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-757661 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
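Here minikube status reports the host as Stopped and exits non-zero (exit status 7), which the test tolerates before enabling the dashboard addon on the stopped profile. Roughly the same sequence by hand, with the profile name reused from above:

  minikube status --format={{.Host}} -p old-k8s-version-757661; echo "exit: $?"
  minikube addons enable dashboard -p old-k8s-version-757661 \
    --images=MetricsScraper=registry.k8s.io/echoserver:1.4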

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (143.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-757661 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-757661 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m22.903391175s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-757661 -n old-k8s-version-757661
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (143.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (68.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-075949 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1007 14:09:51.654489 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-075949 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m8.216167738s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kcbvf" [82f3de21-5bfc-43ff-af92-cf86d4b6ff4f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004181409s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kcbvf" [82f3de21-5bfc-43ff-af92-cf86d4b6ff4f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004234796s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-757661 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)
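Both post-restart checks poll the kubernetes-dashboard pods by label until they report healthy. Outside the test harness, roughly the same wait can be expressed directly with kubectl (context name taken from this test group):

  kubectl --context old-k8s-version-757661 -n kubernetes-dashboard \
    wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
  kubectl --context old-k8s-version-757661 -n kubernetes-dashboard \
    describe deploy/dashboard-metrics-scraper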

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-757661 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-757661 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-757661 -n old-k8s-version-757661
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-757661 -n old-k8s-version-757661: exit status 2 (334.194795ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-757661 -n old-k8s-version-757661
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-757661 -n old-k8s-version-757661: exit status 2 (335.100569ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-757661 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-757661 -n old-k8s-version-757661
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-757661 -n old-k8s-version-757661
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.08s)
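The Pause step pauses the cluster, confirms via minikube status that the API server reports Paused and the kubelet reports Stopped (both with exit status 2, which the test tolerates), then unpauses. The equivalent manual sequence with the same profile:

  minikube pause -p old-k8s-version-757661 --alsologtostderr -v=1
  minikube status --format={{.APIServer}} -p old-k8s-version-757661   # expected: Paused
  minikube status --format={{.Kubelet}} -p old-k8s-version-757661     # expected: Stopped
  minikube unpause -p old-k8s-version-757661 --alsologtostderr -v=1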

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-075949 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ad60cd02-90c7-4dbc-bc5a-8b735a52118f] Pending
helpers_test.go:344: "busybox" [ad60cd02-90c7-4dbc-bc5a-8b735a52118f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ad60cd02-90c7-4dbc-bc5a-8b735a52118f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004826115s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-075949 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.45s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (81.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-229497 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-229497 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m21.131867877s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-075949 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-075949 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.286459689s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-075949 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.43s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-075949 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-075949 --alsologtostderr -v=3: (12.427975174s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.43s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-075949 -n no-preload-075949
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-075949 -n no-preload-075949: exit status 7 (108.297957ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-075949 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (330.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-075949 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1007 14:11:56.898188 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-075949 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (5m30.105374594s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-075949 -n no-preload-075949
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (330.48s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-229497 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bbb1a309-3036-4de8-a325-add8928467c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bbb1a309-3036-4de8-a325-add8928467c4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003467808s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-229497 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-229497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-229497 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.048433309s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-229497 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-229497 --alsologtostderr -v=3
E1007 14:12:13.830995 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-229497 --alsologtostderr -v=3: (12.038495708s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.04s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-229497 -n embed-certs-229497
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-229497 -n embed-certs-229497: exit status 7 (83.03866ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-229497 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (278.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-229497 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1007 14:12:32.144063 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:12:32.150411 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:12:32.162168 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:12:32.183603 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:12:32.225087 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:12:32.309492 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:12:32.471013 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:12:32.792671 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:12:33.434660 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:12:34.716700 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:12:37.278858 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:12:42.400751 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:12:52.642875 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:13:13.124250 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:13:54.086047 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:14:51.654492 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:15:16.011198 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-229497 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m37.601656726s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-229497 -n embed-certs-229497
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (278.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-d8lsb" [d358a97d-141c-4922-81f9-b38ee2a8745e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003812306s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-d8lsb" [d358a97d-141c-4922-81f9-b38ee2a8745e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004312226s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-075949 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-075949 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-075949 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-075949 -n no-preload-075949
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-075949 -n no-preload-075949: exit status 2 (322.087297ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-075949 -n no-preload-075949
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-075949 -n no-preload-075949: exit status 2 (337.205221ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-075949 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-075949 -n no-preload-075949
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-075949 -n no-preload-075949
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-099610 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-099610 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m25.918834156s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.92s)
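This profile starts the API server on port 8444 instead of the default 8443 (--apiserver-port=8444). A quick way, assuming the profile's kubeconfig entry was written as usual, to confirm the nonstandard port:

  kubectl --context default-k8s-diff-port-099610 cluster-info
  kubectl config view \
    -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-099610")].cluster.server}'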

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-w267m" [f74736b6-65a4-443c-bde7-bf157f0d8a35] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004897488s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-w267m" [f74736b6-65a4-443c-bde7-bf157f0d8a35] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006124881s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-229497 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-229497 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-229497 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-229497 --alsologtostderr -v=1: (1.067738677s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-229497 -n embed-certs-229497
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-229497 -n embed-certs-229497: exit status 2 (395.887234ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-229497 -n embed-certs-229497
E1007 14:17:13.831077 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-229497 -n embed-certs-229497: exit status 2 (379.177677ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-229497 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-229497 -n embed-certs-229497
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-229497 -n embed-certs-229497
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.91s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (39.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-941892 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1007 14:17:32.143866 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-941892 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (39.91022687s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.91s)
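This start exercises a CNI-only configuration: --network-plugin=cni with the pod network CIDR passed through --extra-config=kubeadm.pod-network-cidr and no CNI addon installed, which appears to be why the DeployApp step below runs nothing and a scheduling warning is logged later. A sketch, assuming kubeadm's usual kubeadm-config ConfigMap layout, of how to confirm the CIDR reached the cluster configuration:

  kubectl --context newest-cni-941892 -n kube-system get configmap kubeadm-config \
    -o jsonpath='{.data.ClusterConfiguration}' | grep podSubnet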

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-941892 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1007 14:17:59.852598 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-941892 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.374521998s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-941892 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-941892 --alsologtostderr -v=3: (1.273449602s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-941892 -n newest-cni-941892
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-941892 -n newest-cni-941892: exit status 7 (76.118782ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-941892 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
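For reference, minikube status exits with code 7 for a stopped host, which the step above treats as acceptable before enabling the dashboard addon. A minimal shell sketch of that tolerance, assuming the same profile; the explicit exit-code check is an illustrative stand-in for the test's "(may be ok)" handling:

    out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-941892 -n newest-cni-941892
    rc=$?
    # 0 = running, 7 = stopped: both are fine at this point in the suite
    if [ "$rc" -ne 0 ] && [ "$rc" -ne 7 ]; then
      echo "unexpected status exit code: $rc" >&2
      exit 1
    fi
    out/minikube-linux-arm64 addons enable dashboard -p newest-cni-941892 --images=MetricsScraper=registry.k8s.io/echoserver:1.4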

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (16.65s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-941892 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-941892 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (16.319326066s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-941892 -n newest-cni-941892
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.65s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-099610 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [526b71f5-bed3-4c6f-bf03-96e2e6cee06c] Pending
helpers_test.go:344: "busybox" [526b71f5-bed3-4c6f-bf03-96e2e6cee06c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [526b71f5-bed3-4c6f-bf03-96e2e6cee06c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003958941s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-099610 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.51s)
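For reference, DeployApp creates the busybox pod from testdata, waits for it to reach Running, then reads the container's open-file limit. A rough shell equivalent, where the kubectl wait call is a hypothetical stand-in for the helper's label-based poll (the test itself allows up to 8m0s):

    kubectl --context default-k8s-diff-port-099610 create -f testdata/busybox.yaml
    # hypothetical replacement for the poll on "integration-test=busybox"
    kubectl --context default-k8s-diff-port-099610 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context default-k8s-diff-port-099610 exec busybox -- /bin/sh -c "ulimit -n"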

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-941892 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-941892 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-941892 -n newest-cni-941892
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-941892 -n newest-cni-941892: exit status 2 (383.680996ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-941892 -n newest-cni-941892
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-941892 -n newest-cni-941892: exit status 2 (331.954782ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-941892 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-941892 -n newest-cni-941892
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-941892 -n newest-cni-941892
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.26s)
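For reference, the Pause step exercises the full pause/unpause round trip: while paused, status reports APIServer=Paused and Kubelet=Stopped and exits with code 2, and after unpause the same queries return cleanly. A minimal sketch of the sequence using the commands shown above:

    out/minikube-linux-arm64 pause   -p newest-cni-941892 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-941892 -n newest-cni-941892   # prints "Paused", exit 2
    out/minikube-linux-arm64 status --format={{.Kubelet}}   -p newest-cni-941892 -n newest-cni-941892   # prints "Stopped", exit 2
    out/minikube-linux-arm64 unpause -p newest-cni-941892 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-941892 -n newest-cni-941892   # exits 0 once unpaused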

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (80.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-254617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-254617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m20.636849532s)
--- PASS: TestNetworkPlugins/group/auto/Start (80.64s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-099610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-099610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.248635899s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-099610 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.40s)
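For reference, this step points the metrics-server image at the unreachable fake.domain registry, so the follow-up describe is effectively a check of the rendered deployment rather than of a working image pull. A minimal sketch, reusing the flags from the log:

    out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-099610 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context default-k8s-diff-port-099610 describe deploy/metrics-server -n kube-system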

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-099610 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-099610 --alsologtostderr -v=3: (12.237287968s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-099610 -n default-k8s-diff-port-099610
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-099610 -n default-k8s-diff-port-099610: exit status 7 (87.111723ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-099610 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (271.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-099610 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1007 14:19:34.727105 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-099610 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m31.003107686s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-099610 -n default-k8s-diff-port-099610
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (271.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-254617 "pgrep -a kubelet"
I1007 14:19:45.930083 1694126 config.go:182] Loaded profile config "auto-254617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-254617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6s4gr" [1b204a1f-0906-416a-999b-2775eed41e67] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6s4gr" [1b204a1f-0906-416a-999b-2775eed41e67] Running
E1007 14:19:51.654333 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004265188s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-254617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-254617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-254617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
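For reference, every network-plugin group finishes with the same three probes from inside the netcat deployment: DNS resolution of the in-cluster service name, a loopback connection, and a hairpin connection back through the pod's own service. The kindnet, calico, custom-flannel, enable-default-cni, flannel and bridge groups below repeat them against their own contexts; as run here for the auto profile:

    kubectl --context auto-254617 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-254617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-254617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"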

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (47.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-254617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1007 14:20:36.528502 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/no-preload-075949/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:20:36.536430 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/no-preload-075949/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:20:36.547764 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/no-preload-075949/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:20:36.569111 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/no-preload-075949/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:20:36.610438 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/no-preload-075949/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:20:36.692559 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/no-preload-075949/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:20:36.854270 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/no-preload-075949/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:20:37.175581 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/no-preload-075949/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:20:37.817565 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/no-preload-075949/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:20:39.098879 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/no-preload-075949/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:20:41.660555 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/no-preload-075949/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:20:46.782821 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/no-preload-075949/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:20:57.024258 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/no-preload-075949/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-254617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (47.574912679s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (47.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-592gk" [c2c4a126-fcf2-4845-8f83-ef4dfcdffe77] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00389193s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
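For reference, ControllerPod waits up to 10m0s for the CNI controller pod (label app=kindnet here; the calico group below uses k8s-app=calico-node) to become healthy in kube-system. A rough kubectl equivalent, offered as a hypothetical stand-in for the Go helper's poll:

    kubectl --context kindnet-254617 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m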

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-254617 "pgrep -a kubelet"
I1007 14:21:11.409201 1694126 config.go:182] Loaded profile config "kindnet-254617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-254617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h5g78" [94a0953a-f403-4bc7-9ab5-55e538329d01] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h5g78" [94a0953a-f403-4bc7-9ab5-55e538329d01] Running
E1007 14:21:17.506620 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/no-preload-075949/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003772192s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-254617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-254617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-254617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (57.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-254617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1007 14:21:58.468606 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/no-preload-075949/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:22:13.830759 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/addons-779469/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:22:32.144380 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/old-k8s-version-757661/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-254617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (57.48816624s)
--- PASS: TestNetworkPlugins/group/calico/Start (57.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8dn4m" [18ef3fb2-5a4e-4feb-b2e8-b856122498e9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005155539s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-254617 "pgrep -a kubelet"
I1007 14:22:47.606712 1694126 config.go:182] Loaded profile config "calico-254617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-254617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qbkk2" [13d0f06f-402f-4d2c-ae8f-04c2416813f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qbkk2" [13d0f06f-402f-4d2c-ae8f-04c2416813f2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004913746s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-254617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-254617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-254617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-d89z5" [2e87eeb9-3df4-4e22-a11c-35c6a069895d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004341772s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-d89z5" [2e87eeb9-3df4-4e22-a11c-35c6a069895d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004125834s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-099610 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (69.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-254617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-254617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m9.922442373s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.92s)
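For reference, the custom-flannel group passes a manifest path rather than a built-in plugin name, so minikube applies testdata/kube-flannel.yaml as the CNI. A minimal sketch of that start invocation, taken from the log:

    out/minikube-linux-arm64 start -p custom-flannel-254617 --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m \
      --cni=testdata/kube-flannel.yaml \
      --driver=docker --container-runtime=crio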

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-099610 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-099610 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-099610 --alsologtostderr -v=1: (1.051679322s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-099610 -n default-k8s-diff-port-099610
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-099610 -n default-k8s-diff-port-099610: exit status 2 (383.826492ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-099610 -n default-k8s-diff-port-099610
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-099610 -n default-k8s-diff-port-099610: exit status 2 (489.550814ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-099610 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-099610 -n default-k8s-diff-port-099610
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-099610 -n default-k8s-diff-port-099610
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (74.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-254617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-254617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m14.288796021s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (74.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-254617 "pgrep -a kubelet"
I1007 14:24:32.352038 1694126 config.go:182] Loaded profile config "custom-flannel-254617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-254617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-w8h6c" [1f53fbae-ae5a-437b-a71f-420ad983bf7f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-w8h6c" [1f53fbae-ae5a-437b-a71f-420ad983bf7f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.00390749s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-254617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-254617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-254617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-254617 "pgrep -a kubelet"
I1007 14:24:45.800450 1694126 config.go:182] Loaded profile config "enable-default-cni-254617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-254617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zrtj2" [90bc5980-403c-4db9-8e95-3ba2363ba258] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1007 14:24:46.182190 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/auto-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:24:46.188675 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/auto-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:24:46.200088 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/auto-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:24:46.221587 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/auto-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:24:46.263197 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/auto-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:24:46.344794 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/auto-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:24:46.506251 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/auto-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:24:46.828202 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/auto-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:24:47.470400 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/auto-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:24:48.752728 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/auto-254617/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-zrtj2" [90bc5980-403c-4db9-8e95-3ba2363ba258] Running
E1007 14:24:51.314579 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/auto-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:24:51.654687 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/functional-730125/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:24:56.436698 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/auto-254617/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004615702s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-254617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-254617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-254617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (63.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-254617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-254617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m3.83973956s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (82.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-254617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1007 14:25:27.160745 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/auto-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:25:36.531650 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/no-preload-075949/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:26:04.232004 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/no-preload-075949/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:26:05.116181 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kindnet-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:26:05.122581 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kindnet-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:26:05.133973 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kindnet-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:26:05.155463 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kindnet-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:26:05.196839 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kindnet-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:26:05.278288 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kindnet-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:26:05.439960 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kindnet-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:26:05.761644 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kindnet-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:26:06.403482 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kindnet-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:26:07.685020 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kindnet-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:26:08.123029 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/auto-254617/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:26:10.247118 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kindnet-254617/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-254617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m22.681361055s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-l67rw" [ee12b56e-ab27-4a54-8c71-483c53c0b5eb] Running
E1007 14:26:15.369342 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kindnet-254617/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003982547s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-254617 "pgrep -a kubelet"
I1007 14:26:17.929617 1694126 config.go:182] Loaded profile config "flannel-254617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-254617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vnq9t" [9765085e-44e8-4a95-bc34-e803af43d938] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vnq9t" [9765085e-44e8-4a95-bc34-e803af43d938] Running
E1007 14:26:25.611040 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kindnet-254617/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003515707s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-254617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-254617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-254617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-254617 "pgrep -a kubelet"
I1007 14:26:45.221764 1694126 config.go:182] Loaded profile config "bridge-254617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-254617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-982gw" [7491e4f8-ef61-4da6-a730-00f5415e856b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1007 14:26:46.092598 1694126 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kindnet-254617/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-982gw" [7491e4f8-ef61-4da6-a730-00f5415e856b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.009991033s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-254617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-254617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-254617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (29/328)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.55s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-951215 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-951215" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-951215
--- SKIP: TestDownloadOnlyKic (0.55s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.34s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:785: skipping: crio not supported
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-779469 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.34s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-080925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-080925
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-254617 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-254617

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-254617

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-254617

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-254617

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-254617

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-254617

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-254617

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-254617

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-254617

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-254617

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-254617

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-254617" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-254617" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Oct 2024 13:58:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-431816
contexts:
- context:
    cluster: kubernetes-upgrade-431816
    user: kubernetes-upgrade-431816
  name: kubernetes-upgrade-431816
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-431816
  user:
    client-certificate: /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kubernetes-upgrade-431816/client.crt
    client-key: /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kubernetes-upgrade-431816/client.key
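Every kubectl probe in this debugLogs block fails with "context was not found for specified context: kubenet-254617" simply because the test was skipped before the kubenet profile was ever started: the kubeconfig above only carries the leftover kubernetes-upgrade-431816 entry, and current-context is empty. A quick way to confirm that from the same kubeconfig before reading these errors as real failures (a sketch using standard kubectl subcommands):

  # list the contexts that actually exist in the active kubeconfig
  kubectl config get-contexts
  # show the selected context ("" here, since current-context is empty)
  kubectl config current-context
  # target an existing context explicitly rather than relying on current-context
  kubectl --context kubernetes-upgrade-431816 get nodes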

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-254617

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-254617"

                                                
                                                
----------------------- debugLogs end: kubenet-254617 [took: 4.898675261s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-254617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-254617
--- SKIP: TestNetworkPlugins/group/kubenet (5.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-254617 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-254617

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-254617

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-254617

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-254617

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-254617

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-254617

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-254617

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-254617

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-254617

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-254617

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-254617

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-254617" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-254617

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-254617

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-254617

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-254617

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-254617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-254617" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18424-1688750/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Oct 2024 14:03:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-431816
contexts:
- context:
    cluster: kubernetes-upgrade-431816
    extensions:
    - extension:
        last-update: Mon, 07 Oct 2024 14:03:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-431816
  name: kubernetes-upgrade-431816
current-context: kubernetes-upgrade-431816
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-431816
  user:
    client-certificate: /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kubernetes-upgrade-431816/client.crt
    client-key: /home/jenkins/minikube-integration/18424-1688750/.minikube/profiles/kubernetes-upgrade-431816/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-254617

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-254617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254617"

                                                
                                                
----------------------- debugLogs end: cilium-254617 [took: 4.964682535s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-254617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-254617
--- SKIP: TestNetworkPlugins/group/cilium (5.11s)

                                                
                                    