Test Report: Docker_Linux_crio_arm64 19761

b3514a663b846d20eab704dde0dd7737dbedcda0:2024-10-07:36539

Failed tests (4/328)

Order  Failed test                                   Duration (s)
32     TestAddons/serial/GCPAuth/PullSecret          480.85
35     TestAddons/parallel/Ingress                   152.09
37     TestAddons/parallel/MetricsServer             340.16
174    TestMultiControlPlane/serial/RestartCluster   128.5
TestAddons/serial/GCPAuth/PullSecret (480.85s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-952725 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-952725 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [09d43b25-cfc7-418e-8afe-3f374b584082] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/serial/GCPAuth/PullSecret: WARNING: pod list for "default" "integration-test=busybox" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:627: ***** TestAddons/serial/GCPAuth/PullSecret: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:627: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-952725 -n addons-952725
addons_test.go:627: TestAddons/serial/GCPAuth/PullSecret: showing logs for failed pods as of 2024-10-07 10:45:31.753906885 +0000 UTC m=+706.774118794
addons_test.go:627: (dbg) Run:  kubectl --context addons-952725 describe po busybox -n default
addons_test.go:627: (dbg) kubectl --context addons-952725 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-952725/192.168.58.2
Start Time:       Mon, 07 Oct 2024 10:37:31 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.21
IPs:
IP:  10.244.0.21
Containers:
busybox:
Container ID:  
Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sleep
3600
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c845b (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-c845b:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  8m                      default-scheduler  Successfully assigned default/busybox to addons-952725
Normal   Pulling    6m27s (x4 over 8m)      kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning  Failed     6m27s (x4 over 8m)      kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
Warning  Failed     6m27s (x4 over 8m)      kubelet            Error: ErrImagePull
Warning  Failed     6m14s (x6 over 7m59s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    2m58s (x20 over 7m59s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
addons_test.go:627: (dbg) Run:  kubectl --context addons-952725 logs busybox -n default
addons_test.go:627: (dbg) Non-zero exit: kubectl --context addons-952725 logs busybox -n default: exit status 1 (106.072552ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:627: kubectl --context addons-952725 logs busybox -n default: exit status 1
addons_test.go:629: wait: integration-test=busybox within 8m0s: context deadline exceeded
--- FAIL: TestAddons/serial/GCPAuth/PullSecret (480.85s)

TestAddons/parallel/Ingress (152.09s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-952725 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-952725 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-952725 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [08b32324-5c9a-4c37-bc48-335cb434d154] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [08b32324-5c9a-4c37-bc48-335cb434d154] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003468951s
I1007 10:47:24.504760  896726 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-952725 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.770388041s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-952725 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.58.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-952725
helpers_test.go:235: (dbg) docker inspect addons-952725:
-- stdout --
	[
	    {
	        "Id": "85436f7341a901b222d5828da34ca95e82246f0a4a81c195c0b5073b93bab718",
	        "Created": "2024-10-07T10:34:32.746029296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 898077,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-07T10:34:32.896447337Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/85436f7341a901b222d5828da34ca95e82246f0a4a81c195c0b5073b93bab718/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/85436f7341a901b222d5828da34ca95e82246f0a4a81c195c0b5073b93bab718/hostname",
	        "HostsPath": "/var/lib/docker/containers/85436f7341a901b222d5828da34ca95e82246f0a4a81c195c0b5073b93bab718/hosts",
	        "LogPath": "/var/lib/docker/containers/85436f7341a901b222d5828da34ca95e82246f0a4a81c195c0b5073b93bab718/85436f7341a901b222d5828da34ca95e82246f0a4a81c195c0b5073b93bab718-json.log",
	        "Name": "/addons-952725",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-952725:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-952725",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3d8a7bcef2ac32f4df0c9013de3e47ad32d0e114b8350d154e406c848b627b66-init/diff:/var/lib/docker/overlay2/679cc8fccbb0902884eb141037cc21fc6e7a2efac609a53e07ea6b92675ef1c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3d8a7bcef2ac32f4df0c9013de3e47ad32d0e114b8350d154e406c848b627b66/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3d8a7bcef2ac32f4df0c9013de3e47ad32d0e114b8350d154e406c848b627b66/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3d8a7bcef2ac32f4df0c9013de3e47ad32d0e114b8350d154e406c848b627b66/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-952725",
	                "Source": "/var/lib/docker/volumes/addons-952725/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-952725",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-952725",
	                "name.minikube.sigs.k8s.io": "addons-952725",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cedd26adcb83d6591c102466cf3325c8c8d6a1f49982b318f48abf887874ea83",
	            "SandboxKey": "/var/run/docker/netns/cedd26adcb83",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33882"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33885"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33883"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33884"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-952725": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null,
	                    "NetworkID": "59038097841d5b49aa82228d19fa2dd63453f94c2643413b72a58a1768d2ccbc",
	                    "EndpointID": "49d7f5aa381f835679c25b2e09022cf01d0aadb939f06563e6481ef28f4c53bd",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-952725",
	                        "85436f7341a9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-952725 -n addons-952725
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-952725 logs -n 25: (1.532603542s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-490247                                                                     | download-only-490247   | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC | 07 Oct 24 10:34 UTC |
	| delete  | -p download-only-777537                                                                     | download-only-777537   | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC | 07 Oct 24 10:34 UTC |
	| start   | --download-only -p                                                                          | download-docker-457065 | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC |                     |
	|         | download-docker-457065                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-457065                                                                   | download-docker-457065 | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC | 07 Oct 24 10:34 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-858793   | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC |                     |
	|         | binary-mirror-858793                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33319                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-858793                                                                     | binary-mirror-858793   | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC | 07 Oct 24 10:34 UTC |
	| addons  | enable dashboard -p                                                                         | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC |                     |
	|         | addons-952725                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC |                     |
	|         | addons-952725                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-952725 --wait=true                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC | 07 Oct 24 10:37 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-952725 addons disable                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:37 UTC | 07 Oct 24 10:37 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-952725 addons disable                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:45 UTC | 07 Oct 24 10:45 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-952725 addons disable                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:45 UTC | 07 Oct 24 10:45 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ip      | addons-952725 ip                                                                            | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:45 UTC | 07 Oct 24 10:45 UTC |
	| addons  | addons-952725 addons disable                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:45 UTC | 07 Oct 24 10:45 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:46 UTC | 07 Oct 24 10:46 UTC |
	|         | -p addons-952725                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-952725 ssh cat                                                                       | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:46 UTC | 07 Oct 24 10:46 UTC |
	|         | /opt/local-path-provisioner/pvc-3e4c90e6-24f8-4bf4-8b84-84508c280cb4_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-952725 addons disable                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:46 UTC | 07 Oct 24 10:46 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-952725 addons                                                                        | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:46 UTC | 07 Oct 24 10:46 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:46 UTC | 07 Oct 24 10:46 UTC |
	|         | -p addons-952725                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-952725 addons disable                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:46 UTC | 07 Oct 24 10:46 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-952725 addons                                                                        | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:46 UTC | 07 Oct 24 10:46 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-952725 addons                                                                        | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:46 UTC | 07 Oct 24 10:47 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-952725 addons                                                                        | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:47 UTC | 07 Oct 24 10:47 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-952725 ssh curl -s                                                                   | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:47 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-952725 ip                                                                            | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:49 UTC | 07 Oct 24 10:49 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 10:34:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 10:34:25.854356  897600 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:34:25.854565  897600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:34:25.854592  897600 out.go:358] Setting ErrFile to fd 2...
	I1007 10:34:25.854613  897600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:34:25.854877  897600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-891319/.minikube/bin
	I1007 10:34:25.855346  897600 out.go:352] Setting JSON to false
	I1007 10:34:25.856364  897600 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22610,"bootTime":1728274656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1007 10:34:25.856454  897600 start.go:139] virtualization:  
	I1007 10:34:25.858909  897600 out.go:177] * [addons-952725] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 10:34:25.860971  897600 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 10:34:25.861095  897600 notify.go:220] Checking for updates...
	I1007 10:34:25.864837  897600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:34:25.866677  897600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-891319/kubeconfig
	I1007 10:34:25.868665  897600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-891319/.minikube
	I1007 10:34:25.870395  897600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 10:34:25.872291  897600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 10:34:25.874451  897600 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:34:25.897794  897600 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 10:34:25.897932  897600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 10:34:25.971117  897600 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:54 SystemTime:2024-10-07 10:34:25.961029087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 10:34:25.971244  897600 docker.go:318] overlay module found
	I1007 10:34:25.973837  897600 out.go:177] * Using the docker driver based on user configuration
	I1007 10:34:25.976446  897600 start.go:297] selected driver: docker
	I1007 10:34:25.976468  897600 start.go:901] validating driver "docker" against <nil>
	I1007 10:34:25.976483  897600 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 10:34:25.977096  897600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 10:34:26.031963  897600 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:54 SystemTime:2024-10-07 10:34:26.022332122 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 10:34:26.032198  897600 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 10:34:26.032474  897600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:34:26.034196  897600 out.go:177] * Using Docker driver with root privileges
	I1007 10:34:26.036119  897600 cni.go:84] Creating CNI manager for ""
	I1007 10:34:26.036185  897600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 10:34:26.036202  897600 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 10:34:26.036333  897600 start.go:340] cluster config:
	{Name:addons-952725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-952725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:34:26.038795  897600 out.go:177] * Starting "addons-952725" primary control-plane node in "addons-952725" cluster
	I1007 10:34:26.040746  897600 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 10:34:26.042550  897600 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 10:34:26.044710  897600 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:34:26.044780  897600 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1007 10:34:26.044793  897600 cache.go:56] Caching tarball of preloaded images
	I1007 10:34:26.044800  897600 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 10:34:26.044874  897600 preload.go:172] Found /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1007 10:34:26.044896  897600 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:34:26.045238  897600 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/config.json ...
	I1007 10:34:26.045298  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/config.json: {Name:mkf59b6592a952b92c7d864078e51df503121f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:26.063053  897600 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 10:34:26.063079  897600 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1007 10:34:26.063095  897600 cache.go:194] Successfully downloaded all kic artifacts
	I1007 10:34:26.063119  897600 start.go:360] acquireMachinesLock for addons-952725: {Name:mkcdedd8717c093b45d2d5295616e9bf83c44502 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:34:26.063611  897600 start.go:364] duration metric: took 472.086µs to acquireMachinesLock for "addons-952725"
	I1007 10:34:26.063647  897600 start.go:93] Provisioning new machine with config: &{Name:addons-952725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-952725 Namespace:default APIServerHAVIP: APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:34:26.063737  897600 start.go:125] createHost starting for "" (driver="docker")
	I1007 10:34:26.066109  897600 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1007 10:34:26.066406  897600 start.go:159] libmachine.API.Create for "addons-952725" (driver="docker")
	I1007 10:34:26.066456  897600 client.go:168] LocalClient.Create starting
	I1007 10:34:26.066588  897600 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem
	I1007 10:34:26.704392  897600 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem
	I1007 10:34:27.261849  897600 cli_runner.go:164] Run: docker network inspect addons-952725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1007 10:34:27.276764  897600 cli_runner.go:211] docker network inspect addons-952725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1007 10:34:27.276852  897600 network_create.go:284] running [docker network inspect addons-952725] to gather additional debugging logs...
	I1007 10:34:27.276875  897600 cli_runner.go:164] Run: docker network inspect addons-952725
	W1007 10:34:27.290121  897600 cli_runner.go:211] docker network inspect addons-952725 returned with exit code 1
	I1007 10:34:27.290155  897600 network_create.go:287] error running [docker network inspect addons-952725]: docker network inspect addons-952725: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-952725 not found
	I1007 10:34:27.290170  897600 network_create.go:289] output of [docker network inspect addons-952725]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-952725 not found
	
	** /stderr **
	I1007 10:34:27.290269  897600 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 10:34:27.303965  897600 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa98f111c271 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:cf:52:8b:17} reservation:<nil>}
	I1007 10:34:27.304436  897600 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004d78e0}
	I1007 10:34:27.304464  897600 network_create.go:124] attempt to create docker network addons-952725 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1007 10:34:27.304520  897600 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-952725 addons-952725
	I1007 10:34:27.374295  897600 network_create.go:108] docker network addons-952725 192.168.58.0/24 created
	I1007 10:34:27.374327  897600 kic.go:121] calculated static IP "192.168.58.2" for the "addons-952725" container
	I1007 10:34:27.374411  897600 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1007 10:34:27.388673  897600 cli_runner.go:164] Run: docker volume create addons-952725 --label name.minikube.sigs.k8s.io=addons-952725 --label created_by.minikube.sigs.k8s.io=true
	I1007 10:34:27.405988  897600 oci.go:103] Successfully created a docker volume addons-952725
	I1007 10:34:27.406098  897600 cli_runner.go:164] Run: docker run --rm --name addons-952725-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-952725 --entrypoint /usr/bin/test -v addons-952725:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1007 10:34:28.565642  897600 cli_runner.go:217] Completed: docker run --rm --name addons-952725-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-952725 --entrypoint /usr/bin/test -v addons-952725:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib: (1.159496599s)
	I1007 10:34:28.565673  897600 oci.go:107] Successfully prepared a docker volume addons-952725
	I1007 10:34:28.565692  897600 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:34:28.565712  897600 kic.go:194] Starting extracting preloaded images to volume ...
	I1007 10:34:28.565784  897600 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-952725:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
	I1007 10:34:32.670907  897600 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-952725:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (4.105070585s)
	I1007 10:34:32.670939  897600 kic.go:203] duration metric: took 4.105224208s to extract preloaded images to volume ...
	W1007 10:34:32.671084  897600 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1007 10:34:32.671203  897600 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1007 10:34:32.732041  897600 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-952725 --name addons-952725 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-952725 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-952725 --network addons-952725 --ip 192.168.58.2 --volume addons-952725:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1007 10:34:33.105474  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Running}}
	I1007 10:34:33.131196  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:34:33.154921  897600 cli_runner.go:164] Run: docker exec addons-952725 stat /var/lib/dpkg/alternatives/iptables
	I1007 10:34:33.218329  897600 oci.go:144] the created container "addons-952725" has a running status.
	I1007 10:34:33.218361  897600 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa...
	I1007 10:34:33.625637  897600 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1007 10:34:33.656580  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:34:33.679580  897600 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1007 10:34:33.679605  897600 kic_runner.go:114] Args: [docker exec --privileged addons-952725 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1007 10:34:33.781306  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:34:33.805652  897600 machine.go:93] provisionDockerMachine start ...
	I1007 10:34:33.805744  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:33.828114  897600 main.go:141] libmachine: Using SSH client type: native
	I1007 10:34:33.828534  897600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I1007 10:34:33.828556  897600 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 10:34:34.000558  897600 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-952725
	
	I1007 10:34:34.000612  897600 ubuntu.go:169] provisioning hostname "addons-952725"
	I1007 10:34:34.000688  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:34.040547  897600 main.go:141] libmachine: Using SSH client type: native
	I1007 10:34:34.040808  897600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I1007 10:34:34.040824  897600 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-952725 && echo "addons-952725" | sudo tee /etc/hostname
	I1007 10:34:34.202347  897600 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-952725
	
	I1007 10:34:34.202429  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:34.223128  897600 main.go:141] libmachine: Using SSH client type: native
	I1007 10:34:34.223385  897600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I1007 10:34:34.223406  897600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-952725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-952725/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-952725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:34:34.364938  897600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:34:34.364969  897600 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19761-891319/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-891319/.minikube}
	I1007 10:34:34.364993  897600 ubuntu.go:177] setting up certificates
	I1007 10:34:34.365005  897600 provision.go:84] configureAuth start
	I1007 10:34:34.365070  897600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-952725
	I1007 10:34:34.383190  897600 provision.go:143] copyHostCerts
	I1007 10:34:34.383283  897600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-891319/.minikube/cert.pem (1123 bytes)
	I1007 10:34:34.383408  897600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-891319/.minikube/key.pem (1679 bytes)
	I1007 10:34:34.383471  897600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-891319/.minikube/ca.pem (1078 bytes)
	I1007 10:34:34.383520  897600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca-key.pem org=jenkins.addons-952725 san=[127.0.0.1 192.168.58.2 addons-952725 localhost minikube]
	I1007 10:34:35.214306  897600 provision.go:177] copyRemoteCerts
	I1007 10:34:35.214377  897600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:34:35.214434  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:35.231529  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
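(Note: the ssh client parameters above show how the harness reaches the node: container port 22 is published to an ephemeral host port, 33881 in this run, and login is as the docker user with the generated key. An illustrative manual equivalent follows, using the key path from this run.)

	# Illustrative: connect to the kic node the same way the test harness does
	PORT=$(docker port addons-952725 22/tcp | head -n1 | cut -d: -f2)
	ssh -i /home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa \
	    -o StrictHostKeyChecking=no -p "$PORT" docker@127.0.0.1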
	I1007 10:34:35.329338  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 10:34:35.354596  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 10:34:35.378606  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 10:34:35.403609  897600 provision.go:87] duration metric: took 1.038589196s to configureAuth
	I1007 10:34:35.403634  897600 ubuntu.go:193] setting minikube options for container-runtime
	I1007 10:34:35.403815  897600 config.go:182] Loaded profile config "addons-952725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:34:35.403920  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:35.421209  897600 main.go:141] libmachine: Using SSH client type: native
	I1007 10:34:35.421476  897600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I1007 10:34:35.421501  897600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:34:35.691031  897600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:34:35.691057  897600 machine.go:96] duration metric: took 1.885383979s to provisionDockerMachine
	I1007 10:34:35.691068  897600 client.go:171] duration metric: took 9.624601333s to LocalClient.Create
	I1007 10:34:35.691080  897600 start.go:167] duration metric: took 9.624675999s to libmachine.API.Create "addons-952725"
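(Note: the only container-runtime option written above is an insecure-registry entry for the service CIDR. An illustrative way to confirm the sysconfig drop-in landed and crio restarted cleanly, using the same container name as in this log:)

	# Illustrative: confirm the crio drop-in written above and that crio came back up
	docker exec addons-952725 cat /etc/sysconfig/crio.minikube
	docker exec addons-952725 systemctl is-active crio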
	I1007 10:34:35.691087  897600 start.go:293] postStartSetup for "addons-952725" (driver="docker")
	I1007 10:34:35.691098  897600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:34:35.691166  897600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:34:35.691215  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:35.708690  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:34:35.805535  897600 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:34:35.808973  897600 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 10:34:35.809010  897600 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 10:34:35.809022  897600 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 10:34:35.809030  897600 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 10:34:35.809041  897600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-891319/.minikube/addons for local assets ...
	I1007 10:34:35.809111  897600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-891319/.minikube/files for local assets ...
	I1007 10:34:35.809140  897600 start.go:296] duration metric: took 118.047143ms for postStartSetup
	I1007 10:34:35.809468  897600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-952725
	I1007 10:34:35.825970  897600 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/config.json ...
	I1007 10:34:35.826256  897600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 10:34:35.826321  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:35.842154  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:34:35.937705  897600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 10:34:35.941963  897600 start.go:128] duration metric: took 9.878201327s to createHost
	I1007 10:34:35.941989  897600 start.go:83] releasing machines lock for "addons-952725", held for 9.878361146s
	I1007 10:34:35.942059  897600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-952725
	I1007 10:34:35.957804  897600 ssh_runner.go:195] Run: cat /version.json
	I1007 10:34:35.957856  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:35.957871  897600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:34:35.957946  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:35.976535  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:34:35.978203  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:34:36.201925  897600 ssh_runner.go:195] Run: systemctl --version
	I1007 10:34:36.206364  897600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:34:36.351874  897600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 10:34:36.355980  897600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:34:36.376054  897600 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1007 10:34:36.376132  897600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:34:36.412681  897600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1007 10:34:36.412704  897600 start.go:495] detecting cgroup driver to use...
	I1007 10:34:36.412767  897600 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 10:34:36.412849  897600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:34:36.428959  897600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:34:36.441124  897600 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:34:36.441253  897600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:34:36.455596  897600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:34:36.471253  897600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:34:36.563469  897600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:34:36.651979  897600 docker.go:233] disabling docker service ...
	I1007 10:34:36.652098  897600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:34:36.674644  897600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:34:36.686862  897600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:34:36.774305  897600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:34:36.871817  897600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:34:36.883806  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:34:36.900022  897600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:34:36.900117  897600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:34:36.909884  897600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:34:36.909976  897600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:34:36.920181  897600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:34:36.930152  897600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:34:36.940174  897600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:34:36.949415  897600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:34:36.959126  897600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:34:36.974614  897600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:34:36.984623  897600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:34:36.993100  897600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 10:34:37.001596  897600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:34:37.104150  897600 ssh_runner.go:195] Run: sudo systemctl restart crio
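(Note: the sed commands above all edit the same drop-in, /etc/crio/crio.conf.d/02-crio.conf, before crio is restarted: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A condensed, illustrative check of the values they are expected to leave behind:)

	# Illustrative: verify the settings the sed edits above should have written
	docker exec addons-952725 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf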
	I1007 10:34:37.230722  897600 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:34:37.230828  897600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:34:37.235089  897600 start.go:563] Will wait 60s for crictl version
	I1007 10:34:37.235151  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:34:37.238496  897600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:34:37.282863  897600 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1007 10:34:37.283022  897600 ssh_runner.go:195] Run: crio --version
	I1007 10:34:37.322568  897600 ssh_runner.go:195] Run: crio --version
	I1007 10:34:37.364316  897600 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1007 10:34:37.366598  897600 cli_runner.go:164] Run: docker network inspect addons-952725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 10:34:37.385012  897600 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1007 10:34:37.388707  897600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:34:37.399314  897600 kubeadm.go:883] updating cluster {Name:addons-952725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-952725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 10:34:37.399439  897600 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:34:37.399500  897600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:34:37.479505  897600 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:34:37.479530  897600 crio.go:433] Images already preloaded, skipping extraction
	I1007 10:34:37.479599  897600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:34:37.520982  897600 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:34:37.521007  897600 cache_images.go:84] Images are preloaded, skipping loading
	I1007 10:34:37.521016  897600 kubeadm.go:934] updating node { 192.168.58.2 8443 v1.31.1 crio true true} ...
	I1007 10:34:37.521121  897600 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-952725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-952725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 10:34:37.521205  897600 ssh_runner.go:195] Run: crio config
	I1007 10:34:37.578407  897600 cni.go:84] Creating CNI manager for ""
	I1007 10:34:37.578430  897600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 10:34:37.578440  897600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 10:34:37.578463  897600 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-952725 NodeName:addons-952725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 10:34:37.578653  897600 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-952725"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 10:34:37.578731  897600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:34:37.587610  897600 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 10:34:37.587699  897600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 10:34:37.596427  897600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1007 10:34:37.618772  897600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:34:37.637690  897600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
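(Note: at this point the generated kubeadm config shown above has been staged on the node as /var/tmp/minikube/kubeadm.yaml.new. When debugging init failures it can be exercised without side effects via kubeadm's dry-run mode; illustrative sketch using the same binary path as the log:)

	# Illustrative: dry-run the staged kubeadm config (makes no changes to the node)
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run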
	I1007 10:34:37.655523  897600 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1007 10:34:37.658796  897600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:34:37.669252  897600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:34:37.761285  897600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:34:37.774877  897600 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725 for IP: 192.168.58.2
	I1007 10:34:37.774899  897600 certs.go:194] generating shared ca certs ...
	I1007 10:34:37.774916  897600 certs.go:226] acquiring lock for ca certs: {Name:mkd5251b1f18df70f58bf1f19694372431d4d649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:37.775091  897600 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-891319/.minikube/ca.key
	I1007 10:34:38.305463  897600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt ...
	I1007 10:34:38.305497  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt: {Name:mk93666416897e119b1c7611486743cb173bf559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:38.306130  897600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-891319/.minikube/ca.key ...
	I1007 10:34:38.306148  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/ca.key: {Name:mk7769eb9aca2ab5423ab4e9e83760bf16b8dd6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:38.306610  897600 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.key
	I1007 10:34:38.517389  897600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.crt ...
	I1007 10:34:38.517420  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.crt: {Name:mk9593b4c767762d1540e6bc17c44307405443b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:38.517599  897600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.key ...
	I1007 10:34:38.517612  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.key: {Name:mk04f7de9ef03f0008a9de352a11a4dbd27d9456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:38.517696  897600 certs.go:256] generating profile certs ...
	I1007 10:34:38.517755  897600 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.key
	I1007 10:34:38.517784  897600 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt with IP's: []
	I1007 10:34:39.103205  897600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt ...
	I1007 10:34:39.103242  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: {Name:mkd8c660c7eacf949a715ce338f3534540eca313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:39.103494  897600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.key ...
	I1007 10:34:39.103511  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.key: {Name:mkdd59bcfee1302dc0f4731e9ceb73455d6a260d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:39.103611  897600 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.key.602ce869
	I1007 10:34:39.103633  897600 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.crt.602ce869 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.58.2]
	I1007 10:34:39.636086  897600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.crt.602ce869 ...
	I1007 10:34:39.636117  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.crt.602ce869: {Name:mkf2607d021c46799287a2dc8937fc01202b5d2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:39.636720  897600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.key.602ce869 ...
	I1007 10:34:39.636737  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.key.602ce869: {Name:mke98d50a7eee0613c19d1f8c4a8cd3d3966dad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:39.636828  897600 certs.go:381] copying /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.crt.602ce869 -> /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.crt
	I1007 10:34:39.636907  897600 certs.go:385] copying /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.key.602ce869 -> /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.key
	I1007 10:34:39.636963  897600 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/proxy-client.key
	I1007 10:34:39.636984  897600 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/proxy-client.crt with IP's: []
	I1007 10:34:40.109374  897600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/proxy-client.crt ...
	I1007 10:34:40.109420  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/proxy-client.crt: {Name:mk5638c1eeee4f7a6321f7ae9ae10926fbd937ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:40.110062  897600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/proxy-client.key ...
	I1007 10:34:40.110086  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/proxy-client.key: {Name:mk487f6e978793460c0139297d52f64610384699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:40.110665  897600 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 10:34:40.110717  897600 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem (1078 bytes)
	I1007 10:34:40.110745  897600 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:34:40.110777  897600 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem (1679 bytes)
	I1007 10:34:40.111501  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:34:40.142516  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 10:34:40.171141  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:34:40.198289  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 10:34:40.227477  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 10:34:40.253377  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 10:34:40.278121  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:34:40.302122  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:34:40.325789  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:34:40.349729  897600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 10:34:40.367170  897600 ssh_runner.go:195] Run: openssl version
	I1007 10:34:40.372438  897600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:34:40.381737  897600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:34:40.385164  897600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:34:40.385234  897600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:34:40.392059  897600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
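(Note: the b5213941.0 link name above follows OpenSSL's hashed-directory convention: the value printed by the openssl x509 -hash command becomes the symlink name so TLS clients can locate minikubeCA.pem in /etc/ssl/certs. Illustrative check of the correspondence:)

	# Illustrative: the subject hash is what names the /etc/ssl/certs symlink created above
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0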
	I1007 10:34:40.401641  897600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:34:40.405030  897600 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 10:34:40.405079  897600 kubeadm.go:392] StartCluster: {Name:addons-952725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-952725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:34:40.405167  897600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 10:34:40.405233  897600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 10:34:40.445507  897600 cri.go:89] found id: ""
	I1007 10:34:40.445581  897600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 10:34:40.454344  897600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 10:34:40.463365  897600 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1007 10:34:40.463430  897600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 10:34:40.472286  897600 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 10:34:40.472309  897600 kubeadm.go:157] found existing configuration files:
	
	I1007 10:34:40.472389  897600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 10:34:40.481548  897600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 10:34:40.481615  897600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 10:34:40.490460  897600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 10:34:40.499424  897600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 10:34:40.499491  897600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 10:34:40.508366  897600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 10:34:40.516981  897600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 10:34:40.517046  897600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 10:34:40.525579  897600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 10:34:40.534285  897600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 10:34:40.534354  897600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 10:34:40.542734  897600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1007 10:34:40.583862  897600 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 10:34:40.584366  897600 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 10:34:40.605024  897600 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1007 10:34:40.605098  897600 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1007 10:34:40.605137  897600 kubeadm.go:310] OS: Linux
	I1007 10:34:40.605185  897600 kubeadm.go:310] CGROUPS_CPU: enabled
	I1007 10:34:40.605237  897600 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1007 10:34:40.605289  897600 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1007 10:34:40.605344  897600 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1007 10:34:40.605394  897600 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1007 10:34:40.605448  897600 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1007 10:34:40.605496  897600 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1007 10:34:40.605547  897600 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1007 10:34:40.605596  897600 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1007 10:34:40.667637  897600 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 10:34:40.667753  897600 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 10:34:40.667849  897600 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 10:34:40.674746  897600 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 10:34:40.677687  897600 out.go:235]   - Generating certificates and keys ...
	I1007 10:34:40.677797  897600 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 10:34:40.677867  897600 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 10:34:41.340980  897600 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 10:34:41.964928  897600 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 10:34:42.554719  897600 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 10:34:43.086933  897600 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 10:34:43.634446  897600 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 10:34:43.634818  897600 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-952725 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1007 10:34:44.317873  897600 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 10:34:44.318200  897600 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-952725 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1007 10:34:44.638496  897600 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 10:34:45.692947  897600 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 10:34:46.401030  897600 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 10:34:46.401264  897600 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 10:34:46.952861  897600 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 10:34:47.595990  897600 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 10:34:47.853874  897600 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 10:34:48.592020  897600 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 10:34:48.931891  897600 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 10:34:48.932734  897600 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 10:34:48.936084  897600 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 10:34:48.938193  897600 out.go:235]   - Booting up control plane ...
	I1007 10:34:48.938296  897600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 10:34:48.938372  897600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 10:34:48.939168  897600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 10:34:48.954182  897600 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 10:34:48.962690  897600 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 10:34:48.962745  897600 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 10:34:49.074144  897600 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 10:34:49.074263  897600 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 10:34:50.075334  897600 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001299143s
	I1007 10:34:50.075427  897600 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 10:34:56.577729  897600 kubeadm.go:310] [api-check] The API server is healthy after 6.50235007s
	I1007 10:34:56.598557  897600 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 10:34:56.611630  897600 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 10:34:56.637098  897600 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 10:34:56.637305  897600 kubeadm.go:310] [mark-control-plane] Marking the node addons-952725 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 10:34:56.648447  897600 kubeadm.go:310] [bootstrap-token] Using token: cz42ba.qa48rmy52comk8ew
	I1007 10:34:56.650159  897600 out.go:235]   - Configuring RBAC rules ...
	I1007 10:34:56.650276  897600 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 10:34:56.655665  897600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 10:34:56.663923  897600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 10:34:56.667814  897600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 10:34:56.671515  897600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 10:34:56.676762  897600 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 10:34:56.985635  897600 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 10:34:57.425085  897600 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 10:34:57.986670  897600 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 10:34:57.987585  897600 kubeadm.go:310] 
	I1007 10:34:57.987661  897600 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 10:34:57.987667  897600 kubeadm.go:310] 
	I1007 10:34:57.987743  897600 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 10:34:57.987747  897600 kubeadm.go:310] 
	I1007 10:34:57.987772  897600 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 10:34:57.987830  897600 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 10:34:57.987880  897600 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 10:34:57.987885  897600 kubeadm.go:310] 
	I1007 10:34:57.987937  897600 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 10:34:57.987942  897600 kubeadm.go:310] 
	I1007 10:34:57.987989  897600 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 10:34:57.988001  897600 kubeadm.go:310] 
	I1007 10:34:57.988053  897600 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 10:34:57.988126  897600 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 10:34:57.988193  897600 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 10:34:57.988198  897600 kubeadm.go:310] 
	I1007 10:34:57.988299  897600 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 10:34:57.988375  897600 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 10:34:57.988380  897600 kubeadm.go:310] 
	I1007 10:34:57.988462  897600 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cz42ba.qa48rmy52comk8ew \
	I1007 10:34:57.988562  897600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e053423b4b9af82cd91e46de0bbe14eaf8715f10cf4af6e7a1673303d5155913 \
	I1007 10:34:57.988583  897600 kubeadm.go:310] 	--control-plane 
	I1007 10:34:57.988588  897600 kubeadm.go:310] 
	I1007 10:34:57.988882  897600 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 10:34:57.988892  897600 kubeadm.go:310] 
	I1007 10:34:57.988973  897600 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cz42ba.qa48rmy52comk8ew \
	I1007 10:34:57.989078  897600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e053423b4b9af82cd91e46de0bbe14eaf8715f10cf4af6e7a1673303d5155913 
	I1007 10:34:57.991771  897600 kubeadm.go:310] W1007 10:34:40.580006    1179 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 10:34:57.992071  897600 kubeadm.go:310] W1007 10:34:40.581256    1179 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 10:34:57.992305  897600 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1007 10:34:57.992417  897600 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 10:34:57.992442  897600 cni.go:84] Creating CNI manager for ""
	I1007 10:34:57.992453  897600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 10:34:57.994737  897600 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 10:34:57.996369  897600 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 10:34:58.001033  897600 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 10:34:58.001053  897600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 10:34:58.024975  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
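(Editor's sketch, not part of the captured log: the CNI step above renders the kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the kubectl bundled under /var/lib/minikube/binaries; the node stays NotReady until the pod network is running. A minimal, generic way to confirm the CNI came up — run on the node, or with any admin kubeconfig — could be:

	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get pods -n kube-system -o wide
	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes    # node reports Ready once the CNI is healthy

The exact pod names and labels depend on the kindnet manifest minikube ships, so this is only an illustration.)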
	I1007 10:34:58.292211  897600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 10:34:58.292417  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:34:58.292527  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-952725 minikube.k8s.io/updated_at=2024_10_07T10_34_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f minikube.k8s.io/name=addons-952725 minikube.k8s.io/primary=true
	I1007 10:34:58.414768  897600 ops.go:34] apiserver oom_adj: -16
	I1007 10:34:58.414985  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:34:58.915580  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:34:59.415833  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:34:59.915562  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:35:00.417978  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:35:00.915364  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:35:01.415119  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:35:01.915393  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:35:02.415839  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:35:02.524408  897600 kubeadm.go:1113] duration metric: took 4.232056186s to wait for elevateKubeSystemPrivileges
	I1007 10:35:02.524441  897600 kubeadm.go:394] duration metric: took 22.119365147s to StartCluster
	I1007 10:35:02.524459  897600 settings.go:142] acquiring lock: {Name:mka20a3e6b00d8e089bb672b1d6ff1f77b6f764a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:35:02.524583  897600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19761-891319/kubeconfig
	I1007 10:35:02.524946  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/kubeconfig: {Name:mk44557a7348260d019750a5a9dae3060b2fe543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:35:02.525609  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 10:35:02.525644  897600 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:35:02.525867  897600 config.go:182] Loaded profile config "addons-952725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:35:02.525915  897600 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1007 10:35:02.525990  897600 addons.go:69] Setting yakd=true in profile "addons-952725"
	I1007 10:35:02.526004  897600 addons.go:234] Setting addon yakd=true in "addons-952725"
	I1007 10:35:02.526030  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.526474  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.526962  897600 addons.go:69] Setting inspektor-gadget=true in profile "addons-952725"
	I1007 10:35:02.526986  897600 addons.go:234] Setting addon inspektor-gadget=true in "addons-952725"
	I1007 10:35:02.527011  897600 addons.go:69] Setting metrics-server=true in profile "addons-952725"
	I1007 10:35:02.527034  897600 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-952725"
	I1007 10:35:02.527062  897600 addons.go:234] Setting addon metrics-server=true in "addons-952725"
	I1007 10:35:02.527080  897600 addons.go:69] Setting gcp-auth=true in profile "addons-952725"
	I1007 10:35:02.527104  897600 mustload.go:65] Loading cluster: addons-952725
	I1007 10:35:02.527141  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.527266  897600 config.go:182] Loaded profile config "addons-952725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:35:02.527496  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.527651  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.528021  897600 addons.go:69] Setting ingress=true in profile "addons-952725"
	I1007 10:35:02.528040  897600 addons.go:234] Setting addon ingress=true in "addons-952725"
	I1007 10:35:02.528082  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.528523  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.533286  897600 addons.go:69] Setting ingress-dns=true in profile "addons-952725"
	I1007 10:35:02.533324  897600 addons.go:234] Setting addon ingress-dns=true in "addons-952725"
	I1007 10:35:02.533368  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.533844  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.540357  897600 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-952725"
	I1007 10:35:02.540424  897600 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-952725"
	I1007 10:35:02.540482  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.540986  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.548226  897600 out.go:177] * Verifying Kubernetes components...
	I1007 10:35:02.551743  897600 addons.go:69] Setting registry=true in profile "addons-952725"
	I1007 10:35:02.551780  897600 addons.go:234] Setting addon registry=true in "addons-952725"
	I1007 10:35:02.551814  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.552319  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.564160  897600 addons.go:69] Setting storage-provisioner=true in profile "addons-952725"
	I1007 10:35:02.564191  897600 addons.go:234] Setting addon storage-provisioner=true in "addons-952725"
	I1007 10:35:02.564239  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.564757  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.571827  897600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:35:02.594410  897600 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-952725"
	I1007 10:35:02.594480  897600 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-952725"
	I1007 10:35:02.594808  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.595099  897600 addons.go:69] Setting volumesnapshots=true in profile "addons-952725"
	I1007 10:35:02.595144  897600 addons.go:234] Setting addon volumesnapshots=true in "addons-952725"
	I1007 10:35:02.595187  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.595589  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.527018  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.615478  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.617139  897600 addons.go:69] Setting volcano=true in profile "addons-952725"
	I1007 10:35:02.617203  897600 addons.go:234] Setting addon volcano=true in "addons-952725"
	I1007 10:35:02.617266  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.617793  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.527026  897600 addons.go:69] Setting cloud-spanner=true in profile "addons-952725"
	I1007 10:35:02.635803  897600 addons.go:234] Setting addon cloud-spanner=true in "addons-952725"
	I1007 10:35:02.635880  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.527073  897600 addons.go:69] Setting default-storageclass=true in profile "addons-952725"
	I1007 10:35:02.641706  897600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-952725"
	I1007 10:35:02.642123  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.527066  897600 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-952725"
	I1007 10:35:02.670115  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.670602  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.680169  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.682894  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.711793  897600 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1007 10:35:02.713578  897600 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 10:35:02.713600  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1007 10:35:02.713663  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.764432  897600 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 10:35:02.764623  897600 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1007 10:35:02.769694  897600 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1007 10:35:02.771988  897600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 10:35:02.773899  897600 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 10:35:02.774068  897600 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 10:35:02.774082  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 10:35:02.774144  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.800137  897600 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1007 10:35:02.805518  897600 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 10:35:02.805543  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1007 10:35:02.805611  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.806114  897600 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 10:35:02.806148  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1007 10:35:02.806204  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.839745  897600 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1007 10:35:02.839943  897600 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1007 10:35:02.841458  897600 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 10:35:02.841487  897600 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 10:35:02.841554  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.848054  897600 out.go:177]   - Using image docker.io/registry:2.8.3
	I1007 10:35:02.851778  897600 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1007 10:35:02.851878  897600 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1007 10:35:02.851952  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.866764  897600 addons.go:234] Setting addon default-storageclass=true in "addons-952725"
	I1007 10:35:02.866820  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.871123  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.875619  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1007 10:35:02.878353  897600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1007 10:35:02.878378  897600 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1007 10:35:02.878448  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.904567  897600 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-952725"
	I1007 10:35:02.904674  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.905173  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.911002  897600 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1007 10:35:02.911025  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1007 10:35:02.911097  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	W1007 10:35:02.934821  897600 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1007 10:35:02.939169  897600 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1007 10:35:02.941107  897600 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1007 10:35:02.941133  897600 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1007 10:35:02.941210  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.942118  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:02.965910  897600 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1007 10:35:02.968956  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1007 10:35:02.969148  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:02.969938  897600 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1007 10:35:02.969951  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1007 10:35:02.970007  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.975800  897600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:35:02.976259  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 10:35:02.984888  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:02.988490  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:02.989042  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1007 10:35:02.993849  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1007 10:35:02.996863  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1007 10:35:03.002757  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1007 10:35:03.005336  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1007 10:35:03.012461  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1007 10:35:03.014735  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1007 10:35:03.019463  897600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1007 10:35:03.019508  897600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1007 10:35:03.019590  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:03.072600  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:03.075320  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:03.083799  897600 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1007 10:35:03.089314  897600 out.go:177]   - Using image docker.io/busybox:stable
	I1007 10:35:03.094587  897600 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 10:35:03.094615  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1007 10:35:03.094685  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:03.115359  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:03.133281  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:03.144073  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:03.144933  897600 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 10:35:03.144947  897600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 10:35:03.145007  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:03.145338  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:03.170693  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:03.173704  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:03.201878  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	W1007 10:35:03.202860  897600 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1007 10:35:03.202887  897600 retry.go:31] will retry after 220.911747ms: ssh: handshake failed: EOF
	I1007 10:35:03.376353  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 10:35:03.483504  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 10:35:03.489681  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 10:35:03.516936  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 10:35:03.537445  897600 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1007 10:35:03.537516  897600 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1007 10:35:03.586679  897600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 10:35:03.586739  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1007 10:35:03.592022  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 10:35:03.609876  897600 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1007 10:35:03.609947  897600 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1007 10:35:03.673885  897600 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1007 10:35:03.673955  897600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1007 10:35:03.686503  897600 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1007 10:35:03.686580  897600 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1007 10:35:03.693012  897600 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1007 10:35:03.693082  897600 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1007 10:35:03.700417  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1007 10:35:03.727830  897600 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1007 10:35:03.727902  897600 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1007 10:35:03.799625  897600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 10:35:03.799698  897600 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 10:35:03.810879  897600 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1007 10:35:03.810948  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1007 10:35:03.825451  897600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1007 10:35:03.825526  897600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1007 10:35:03.833428  897600 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1007 10:35:03.833502  897600 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1007 10:35:03.889881  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 10:35:03.895171  897600 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1007 10:35:03.895281  897600 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1007 10:35:03.965772  897600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1007 10:35:03.965850  897600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1007 10:35:03.967208  897600 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1007 10:35:03.967274  897600 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1007 10:35:03.970676  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1007 10:35:04.011302  897600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 10:35:04.011387  897600 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 10:35:04.026501  897600 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1007 10:35:04.026578  897600 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1007 10:35:04.057745  897600 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1007 10:35:04.057826  897600 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1007 10:35:04.109056  897600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1007 10:35:04.109132  897600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1007 10:35:04.155197  897600 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1007 10:35:04.155271  897600 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1007 10:35:04.174135  897600 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1007 10:35:04.174210  897600 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1007 10:35:04.201660  897600 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1007 10:35:04.201735  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1007 10:35:04.214759  897600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1007 10:35:04.214835  897600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1007 10:35:04.245445  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 10:35:04.276108  897600 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 10:35:04.276180  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1007 10:35:04.279501  897600 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1007 10:35:04.279571  897600 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1007 10:35:04.336538  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1007 10:35:04.339381  897600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1007 10:35:04.339451  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1007 10:35:04.355299  897600 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1007 10:35:04.355377  897600 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1007 10:35:04.382326  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 10:35:04.412722  897600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1007 10:35:04.412799  897600 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1007 10:35:04.448799  897600 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1007 10:35:04.448869  897600 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1007 10:35:04.493273  897600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1007 10:35:04.493347  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1007 10:35:04.646126  897600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1007 10:35:04.646194  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1007 10:35:04.655675  897600 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 10:35:04.655735  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1007 10:35:04.815091  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 10:35:04.882845  897600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 10:35:04.882879  897600 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1007 10:35:05.027386  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 10:35:05.939120  897600 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.962835449s)
	I1007 10:35:05.939195  897600 start.go:971] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
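(Editor's note: the sed pipeline that completed above edits the coredns ConfigMap so in-cluster workloads can resolve host.minikube.internal. Reading the expressions in the command itself, the stanza injected ahead of the "forward . /etc/resolv.conf" line is effectively:

	hosts {
	   192.168.58.1 host.minikube.internal
	   fallthrough
	}

together with a "log" directive inserted before "errors". This is derived from the command text, not from a dump of the resulting Corefile.)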
	I1007 10:35:05.939368  897600 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.963549002s)
	I1007 10:35:05.940701  897600 node_ready.go:35] waiting up to 6m0s for node "addons-952725" to be "Ready" ...
	I1007 10:35:06.802060  897600 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-952725" context rescaled to 1 replicas
	I1007 10:35:07.957497  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:08.746978  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.369542883s)
	I1007 10:35:08.747015  897600 addons.go:475] Verifying addon ingress=true in "addons-952725"
	I1007 10:35:08.747405  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.263814258s)
	I1007 10:35:08.747586  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.257834912s)
	I1007 10:35:08.747639  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.230645666s)
	I1007 10:35:08.747715  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.155608157s)
	I1007 10:35:08.747768  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.04728501s)
	I1007 10:35:08.747828  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.85788165s)
	I1007 10:35:08.747911  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.77717219s)
	I1007 10:35:08.747925  897600 addons.go:475] Verifying addon registry=true in "addons-952725"
	I1007 10:35:08.748015  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.502498777s)
	I1007 10:35:08.748028  897600 addons.go:475] Verifying addon metrics-server=true in "addons-952725"
	I1007 10:35:08.748071  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.411461291s)
	I1007 10:35:08.749248  897600 out.go:177] * Verifying ingress addon...
	I1007 10:35:08.749323  897600 out.go:177] * Verifying registry addon...
	I1007 10:35:08.750430  897600 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-952725 service yakd-dashboard -n yakd-dashboard
	
	I1007 10:35:08.751315  897600 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1007 10:35:08.751352  897600 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1007 10:35:08.766074  897600 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 10:35:08.766134  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:08.772734  897600 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1007 10:35:08.772755  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1007 10:35:08.778735  897600 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
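(Editor's note: the 'storage-provisioner-rancher' warning above is an optimistic-concurrency conflict while marking the local-path StorageClass as default — two writers raced on the same object version. If a later reconcile does not fix it, the annotation can be set by hand with standard kubectl; a hedged sketch, not the addon's own code path:

	kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
	kubectl get storageclass    # exactly one class should be listed as (default)

)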
	I1007 10:35:08.793893  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.411481213s)
	W1007 10:35:08.794230  897600 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 10:35:08.794282  897600 retry.go:31] will retry after 125.505555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 10:35:08.794064  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.978939003s)
	I1007 10:35:08.920327  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
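(Editor's note: the failed apply above is an ordering problem rather than a broken manifest — the VolumeSnapshotClass object is submitted in the same kubectl invocation as the CRDs that define it, so its REST mapping does not exist yet. The log shows minikube retrying with "apply --force" just above, and that second attempt completes a few lines below once the CRDs are registered. A generic alternative, sketched here with plain kubectl and the same file paths, is to establish the CRDs first and only then apply the custom resources:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

)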
	I1007 10:35:09.182721  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.155233855s)
	I1007 10:35:09.182756  897600 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-952725"
	I1007 10:35:09.184794  897600 out.go:177] * Verifying csi-hostpath-driver addon...
	I1007 10:35:09.187263  897600 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1007 10:35:09.201379  897600 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 10:35:09.201406  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:09.304635  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:09.306552  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:09.691974  897600 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 10:35:09.692055  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:09.792859  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:09.793067  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:10.192620  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:10.256668  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:10.257569  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:10.444924  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:10.692103  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:10.792043  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:10.794090  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:11.191914  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:11.255246  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:11.256911  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:11.696569  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:11.761158  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:11.761884  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:11.961572  897600 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1007 10:35:11.961677  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:11.985927  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:12.153214  897600 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1007 10:35:12.190014  897600 addons.go:234] Setting addon gcp-auth=true in "addons-952725"
	I1007 10:35:12.190079  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:12.190558  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:12.199361  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:12.208710  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.288326081s)
	I1007 10:35:12.220493  897600 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1007 10:35:12.220551  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:12.242276  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:12.262363  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:12.264013  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:12.362417  897600 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1007 10:35:12.364386  897600 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 10:35:12.366124  897600 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1007 10:35:12.366178  897600 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1007 10:35:12.390675  897600 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1007 10:35:12.390698  897600 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1007 10:35:12.411513  897600 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 10:35:12.411533  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1007 10:35:12.432763  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 10:35:12.691663  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:12.758252  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:12.759029  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:12.946840  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:13.111159  897600 addons.go:475] Verifying addon gcp-auth=true in "addons-952725"
	I1007 10:35:13.113394  897600 out.go:177] * Verifying gcp-auth addon...
	I1007 10:35:13.116487  897600 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1007 10:35:13.119702  897600 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1007 10:35:13.119723  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:13.192603  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:13.256809  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:13.257747  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:13.620606  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:13.691487  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:13.755605  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:13.756775  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:14.120100  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:14.190992  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:14.255818  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:14.256883  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:14.620582  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:14.690952  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:14.755471  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:14.756463  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:15.120597  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:15.192926  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:15.256178  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:15.257136  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:15.444976  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:15.619980  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:15.690837  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:15.755244  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:15.755928  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:16.120603  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:16.191512  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:16.255250  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:16.256005  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:16.620336  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:16.690606  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:16.755468  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:16.757117  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:17.119517  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:17.192605  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:17.255937  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:17.256270  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:17.621005  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:17.690927  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:17.755359  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:17.756052  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:17.944383  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:18.119729  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:18.191575  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:18.255118  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:18.255806  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:18.619683  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:18.691159  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:18.755966  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:18.756453  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:19.119968  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:19.191378  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:19.254984  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:19.256050  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:19.620195  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:19.691360  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:19.755009  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:19.755810  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:20.119987  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:20.191151  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:20.254722  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:20.255393  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:20.443826  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:20.620328  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:20.690488  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:20.755250  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:20.755942  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:21.119891  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:21.190982  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:21.255407  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:21.255842  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:21.620175  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:21.690600  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:21.755382  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:21.756836  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:22.119501  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:22.191844  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:22.255316  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:22.256367  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:22.444925  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:22.619718  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:22.690635  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:22.755811  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:22.756561  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:23.119705  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:23.192162  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:23.255519  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:23.255872  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:23.620757  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:23.691758  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:23.755520  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:23.756427  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:24.120513  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:24.192086  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:24.256295  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:24.256686  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:24.620116  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:24.691382  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:24.755043  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:24.755951  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:24.944376  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:25.120456  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:25.191982  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:25.255098  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:25.255804  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:25.619905  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:25.691142  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:25.754831  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:25.755802  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:26.119571  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:26.196506  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:26.255701  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:26.255919  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:26.620804  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:26.691121  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:26.758966  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:26.760279  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:26.944889  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:27.120045  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:27.191052  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:27.255586  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:27.256503  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:27.620151  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:27.691250  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:27.755085  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:27.755974  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:28.120761  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:28.191820  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:28.257187  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:28.258995  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:28.620227  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:28.691090  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:28.755198  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:28.755981  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:29.120457  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:29.191919  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:29.254631  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:29.255456  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:29.444477  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:29.620919  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:29.690895  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:29.755819  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:29.756584  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:30.121124  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:30.190834  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:30.256047  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:30.256366  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:30.620035  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:30.691509  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:30.755191  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:30.756316  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:31.120532  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:31.191673  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:31.255118  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:31.256003  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:31.620057  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:31.691374  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:31.754961  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:31.755839  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:31.944435  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:32.120072  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:32.192527  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:32.255012  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:32.255833  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:32.619735  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:32.691148  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:32.754877  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:32.755735  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:33.119787  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:33.193216  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:33.254954  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:33.256083  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:33.620438  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:33.691119  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:33.755672  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:33.756450  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:33.944655  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:34.120692  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:34.192034  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:34.255673  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:34.256037  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:34.619497  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:34.690310  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:34.755281  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:34.756852  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:35.120378  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:35.191733  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:35.254498  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:35.255387  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:35.619412  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:35.691028  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:35.755504  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:35.756209  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:36.120483  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:36.191192  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:36.255967  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:36.256182  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:36.444124  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:36.620053  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:36.721319  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:36.756145  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:36.756893  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:37.119905  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:37.191784  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:37.254501  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:37.255324  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:37.628685  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:37.693437  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:37.755728  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:37.756994  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:38.120487  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:38.191546  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:38.255676  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:38.256728  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:38.444446  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:38.620386  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:38.690692  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:38.755738  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:38.756669  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:39.119804  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:39.190963  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:39.254542  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:39.255341  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:39.620305  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:39.691312  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:39.755717  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:39.755914  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:40.120417  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:40.191998  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:40.256167  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:40.256688  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:40.620044  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:40.691277  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:40.754201  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:40.755358  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:40.944035  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:41.120077  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:41.192620  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:41.257544  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:41.258405  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:41.619956  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:41.691181  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:41.755424  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:41.756305  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:42.121511  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:42.193451  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:42.256214  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:42.257190  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:42.619714  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:42.691138  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:42.755621  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:42.755666  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:43.120030  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:43.192505  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:43.255509  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:43.255572  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:43.445032  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:43.620043  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:43.690901  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:43.755257  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:43.755821  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:44.120394  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:44.192294  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:44.254643  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:44.256031  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:44.620087  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:44.691543  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:44.755815  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:44.756615  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:45.123135  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:45.197595  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:45.256190  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:45.257001  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:45.445510  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:45.619979  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:45.690952  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:45.760639  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:45.803827  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:45.954763  897600 node_ready.go:49] node "addons-952725" has status "Ready":"True"
	I1007 10:35:45.954837  897600 node_ready.go:38] duration metric: took 40.014076963s for node "addons-952725" to be "Ready" ...
	I1007 10:35:45.954862  897600 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 10:35:45.967201  897600 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6b52m" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:46.131084  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:46.267250  897600 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 10:35:46.267324  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:46.310010  897600 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 10:35:46.310163  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:46.310414  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:46.623664  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:46.728968  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:46.808033  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:46.808162  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:46.974217  897600 pod_ready.go:93] pod "coredns-7c65d6cfc9-6b52m" in "kube-system" namespace has status "Ready":"True"
	I1007 10:35:46.974238  897600 pod_ready.go:82] duration metric: took 1.006959941s for pod "coredns-7c65d6cfc9-6b52m" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:46.974261  897600 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-952725" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:46.979966  897600 pod_ready.go:93] pod "etcd-addons-952725" in "kube-system" namespace has status "Ready":"True"
	I1007 10:35:46.980036  897600 pod_ready.go:82] duration metric: took 5.766704ms for pod "etcd-addons-952725" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:46.980073  897600 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-952725" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:46.988435  897600 pod_ready.go:93] pod "kube-apiserver-addons-952725" in "kube-system" namespace has status "Ready":"True"
	I1007 10:35:46.988505  897600 pod_ready.go:82] duration metric: took 8.409562ms for pod "kube-apiserver-addons-952725" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:46.988531  897600 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-952725" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:46.995460  897600 pod_ready.go:93] pod "kube-controller-manager-addons-952725" in "kube-system" namespace has status "Ready":"True"
	I1007 10:35:46.995530  897600 pod_ready.go:82] duration metric: took 6.970321ms for pod "kube-controller-manager-addons-952725" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:46.995558  897600 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9dvhw" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:47.130323  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:47.146870  897600 pod_ready.go:93] pod "kube-proxy-9dvhw" in "kube-system" namespace has status "Ready":"True"
	I1007 10:35:47.146942  897600 pod_ready.go:82] duration metric: took 151.362352ms for pod "kube-proxy-9dvhw" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:47.146972  897600 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-952725" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:47.226598  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:47.255699  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:47.256410  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:47.550393  897600 pod_ready.go:93] pod "kube-scheduler-addons-952725" in "kube-system" namespace has status "Ready":"True"
	I1007 10:35:47.550421  897600 pod_ready.go:82] duration metric: took 403.42634ms for pod "kube-scheduler-addons-952725" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:47.550433  897600 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:47.620092  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:47.722085  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:47.756130  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:47.757088  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:48.121233  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:48.197219  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:48.255986  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:48.258019  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:48.620846  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:48.723760  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:48.757851  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:48.758871  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:49.125829  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:49.195058  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:49.259063  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:49.260493  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:49.562490  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:35:49.624943  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:49.694868  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:49.758809  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:49.760521  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:50.121387  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:50.195276  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:50.259359  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:50.261650  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:50.623481  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:50.695172  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:50.758130  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:50.759920  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:51.135736  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:51.201723  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:51.258129  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:51.260394  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:51.623263  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:51.695999  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:51.761490  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:51.763003  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:52.056970  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:35:52.121263  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:52.222768  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:52.256069  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:52.256979  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:52.620464  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:52.692213  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:52.756741  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:52.757107  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:53.120617  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:53.193206  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:53.256466  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:53.257114  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:53.621091  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:53.692955  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:53.756172  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:53.757086  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:54.058047  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:35:54.119905  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:54.192789  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:54.257499  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:54.257803  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:54.620491  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:54.693275  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:54.763293  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:54.764528  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:55.121439  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:55.193547  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:55.260130  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:55.261469  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:55.620683  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:55.692986  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:55.756887  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:55.757385  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:56.121169  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:56.193777  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:56.257642  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:56.258400  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:56.557330  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:35:56.622515  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:56.692997  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:56.756053  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:56.756816  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:57.120759  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:57.194163  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:57.255667  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:57.256426  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:57.621459  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:57.697848  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:57.778608  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:57.779360  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:58.120365  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:58.193773  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:58.257015  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:58.257978  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:58.620988  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:58.693248  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:58.756341  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:58.757601  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:59.057328  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:35:59.122386  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:59.194061  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:59.257078  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:59.257639  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:59.625474  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:59.696349  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:59.758227  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:59.760817  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:00.138236  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:00.266427  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:00.373549  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:00.393821  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:00.621528  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:00.693551  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:00.757398  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:00.758832  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:01.121497  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:01.223837  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:01.256097  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:01.257046  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:01.557170  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:01.620146  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:01.692315  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:01.756457  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:01.757384  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:02.120530  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:02.192537  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:02.256441  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:02.257205  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:02.620575  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:02.691966  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:02.757661  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:02.759096  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:03.120833  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:03.195054  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:03.257723  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:03.259916  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:03.559281  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:03.630899  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:03.693054  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:03.756939  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:03.758589  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:04.120217  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:04.191809  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:04.258440  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:04.259250  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:04.621066  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:04.692994  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:04.757813  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:04.758793  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:05.123837  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:05.192997  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:05.257190  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:05.258719  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:05.620429  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:05.692091  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:05.756781  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:05.759620  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:06.061233  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:06.125816  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:06.223764  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:06.256890  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:06.257100  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:06.621286  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:06.694035  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:06.755722  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:06.757125  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:07.121276  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:07.222844  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:07.256796  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:07.257516  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:07.621750  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:07.692311  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:07.755698  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:07.758308  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:08.120124  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:08.193283  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:08.256637  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:08.257428  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:08.602063  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:08.626477  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:08.692923  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:08.757057  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:08.759065  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:09.120448  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:09.203017  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:09.257440  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:09.258354  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:09.620946  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:09.691975  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:09.756334  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:09.756623  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:10.120732  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:10.204811  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:10.255966  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:10.256934  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:10.619963  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:10.692649  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:10.755930  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:10.756716  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:11.057897  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:11.120671  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:11.203726  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:11.265477  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:11.266801  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:11.620344  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:11.695704  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:11.756591  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:11.758608  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:12.120970  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:12.193725  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:12.256133  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:12.257249  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:12.620551  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:12.691560  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:12.756792  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:12.757868  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:13.120907  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:13.192492  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:13.256494  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:13.256940  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:13.560170  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:13.622678  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:13.692825  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:13.756668  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:13.757685  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:14.120019  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:14.194422  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:14.294068  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:14.295874  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:14.620758  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:14.692984  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:14.754863  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:14.756559  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:15.121932  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:15.221910  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:15.256819  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:15.257999  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:15.620632  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:15.693528  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:15.756480  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:15.756807  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:16.057207  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:16.123437  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:16.233871  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:16.281986  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:16.282520  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:16.621043  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:16.692619  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:16.755211  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:16.756172  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:17.120299  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:17.192047  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:17.257387  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:17.261722  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:17.621312  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:17.692848  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:17.757740  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:17.759588  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:18.059632  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:18.120437  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:18.193952  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:18.297191  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:18.298108  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:18.620592  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:18.691873  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:18.756605  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:18.758236  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:19.121822  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:19.222786  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:19.257853  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:19.259401  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:19.622863  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:19.692777  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:19.757722  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:19.757855  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:20.059736  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:20.120415  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:20.192636  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:20.258119  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:20.258300  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:20.620425  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:20.692121  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:20.757948  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:20.758510  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:21.121078  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:21.196990  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:21.262762  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:21.263812  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:21.620998  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:21.695405  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:21.759154  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:21.761213  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:22.121238  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:22.193002  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:22.255903  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:22.257985  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:22.557532  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:22.619854  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:22.691825  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:22.755977  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:22.757311  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:23.120045  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:23.192714  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:23.256618  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:23.257867  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:23.621784  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:23.692493  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:23.755754  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:23.756013  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:24.120997  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:24.199341  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:24.261855  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:24.263649  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:24.567589  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:24.620347  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:24.693034  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:24.758553  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:24.759823  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:25.121284  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:25.192482  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:25.258401  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:25.259288  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:25.620881  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:25.693052  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:25.758665  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:25.759881  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:26.122172  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:26.193352  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:26.256373  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:26.257010  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:26.620981  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:26.692424  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:26.760691  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:26.762090  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:27.065897  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:27.120699  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:27.200876  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:27.256494  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:27.258942  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:27.621022  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:27.693005  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:27.756608  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:27.758618  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:28.124027  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:28.227304  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:28.261286  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:28.262057  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:28.622231  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:28.692944  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:28.757021  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:28.759390  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:29.121289  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:29.192304  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:29.258422  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:29.259892  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:29.557547  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:29.620863  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:29.692586  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:29.758755  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:29.759898  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:30.122227  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:30.194325  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:30.268582  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:30.269842  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:30.620438  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:30.692514  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:30.758090  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:30.760515  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:31.120944  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:31.191971  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:31.256391  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:31.257454  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:31.620766  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:31.691829  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:31.756102  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:31.757298  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:32.056766  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:32.120556  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:32.192341  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:32.255792  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:32.258022  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:32.620006  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:32.693405  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:32.767048  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:32.770238  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:33.120954  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:33.193114  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:33.260266  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:33.262668  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:33.621503  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:33.698675  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:33.760149  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:33.761745  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:34.059139  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:34.120283  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:34.192365  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:34.261454  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:34.262491  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:34.620004  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:34.692101  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:34.762763  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:34.763412  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:35.121079  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:35.192159  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:35.256595  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:35.257051  897600 kapi.go:107] duration metric: took 1m26.505736633s to wait for kubernetes.io/minikube-addons=registry ...
	I1007 10:36:35.621537  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:35.693264  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:35.757410  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:36.061799  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:36.122155  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:36.223909  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:36.256432  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:36.627260  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:36.692268  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:36.756392  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:37.127777  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:37.231344  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:37.256480  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:37.621259  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:37.692200  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:37.757320  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:38.120381  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:38.192366  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:38.256155  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:38.558309  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:38.621085  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:38.692908  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:38.755860  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:39.120908  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:39.192235  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:39.255772  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:39.620405  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:39.692368  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:39.755794  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:40.120388  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:40.193016  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:40.258038  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:40.567757  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:40.621083  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:40.722495  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:40.755723  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:41.120827  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:41.192100  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:41.256769  897600 kapi.go:107] duration metric: took 1m32.505411601s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1007 10:36:41.620061  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:41.692679  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:42.134403  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:42.201481  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:42.624335  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:42.725782  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:43.056878  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:43.120323  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:43.192468  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:43.620202  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:43.693539  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:44.120162  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:44.191589  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:44.627461  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:44.691876  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:45.120394  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:45.125783  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:45.193867  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:45.620955  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:45.692911  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:46.120287  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:46.194226  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:46.620153  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:46.722476  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:47.120366  897600 kapi.go:107] duration metric: took 1m34.003877783s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1007 10:36:47.122614  897600 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-952725 cluster.
	I1007 10:36:47.124380  897600 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1007 10:36:47.126016  897600 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
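The three addon messages above describe how gcp-auth behaves once enabled: credentials are mounted into every new pod unless the pod opts out. As a minimal sketch of that opt-out, the pod below carries the `gcp-auth-skip-secret` label so the mutating webhook leaves it untouched. Only the label key comes from the notice above; the pod name, image, and the "true" value are illustrative assumptions.

	kubectl --context addons-952725 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-creds-example          # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"  # label key from the addon notice above; value assumed
	spec:
	  containers:
	  - name: shell
	    image: busybox                # placeholder image
	    command: ["sleep", "3600"]
	EOF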
	I1007 10:36:47.192217  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:47.556277  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:47.692900  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:48.195047  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:48.693120  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:49.193204  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:49.558005  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:49.693026  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:50.192314  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:50.692782  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:51.192448  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:51.692991  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:52.059957  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:52.195170  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:52.693081  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:53.192988  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:53.692325  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:54.192334  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:54.562255  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:54.693068  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:55.193485  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:55.691776  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:56.219672  897600 kapi.go:107] duration metric: took 1m47.032405778s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1007 10:36:56.221946  897600 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1007 10:36:56.223744  897600 addons.go:510] duration metric: took 1m53.697824035s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1007 10:36:56.556979  897600 pod_ready.go:93] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"True"
	I1007 10:36:56.557006  897600 pod_ready.go:82] duration metric: took 1m9.006564427s for pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace to be "Ready" ...
	I1007 10:36:56.557018  897600 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fmzt5" in "kube-system" namespace to be "Ready" ...
	I1007 10:36:56.562379  897600 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fmzt5" in "kube-system" namespace has status "Ready":"True"
	I1007 10:36:56.562402  897600 pod_ready.go:82] duration metric: took 5.376408ms for pod "nvidia-device-plugin-daemonset-fmzt5" in "kube-system" namespace to be "Ready" ...
	I1007 10:36:56.562421  897600 pod_ready.go:39] duration metric: took 1m10.607532851s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 10:36:56.562436  897600 api_server.go:52] waiting for apiserver process to appear ...
	I1007 10:36:56.562471  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 10:36:56.562536  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 10:36:56.650122  897600 cri.go:89] found id: "a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a"
	I1007 10:36:56.650146  897600 cri.go:89] found id: ""
	I1007 10:36:56.650155  897600 logs.go:282] 1 containers: [a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a]
	I1007 10:36:56.650212  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:36:56.654573  897600 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 10:36:56.654650  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 10:36:56.698951  897600 cri.go:89] found id: "db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e"
	I1007 10:36:56.698974  897600 cri.go:89] found id: ""
	I1007 10:36:56.698982  897600 logs.go:282] 1 containers: [db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e]
	I1007 10:36:56.699037  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:36:56.702707  897600 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 10:36:56.702791  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 10:36:56.743359  897600 cri.go:89] found id: "dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264"
	I1007 10:36:56.743385  897600 cri.go:89] found id: ""
	I1007 10:36:56.743395  897600 logs.go:282] 1 containers: [dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264]
	I1007 10:36:56.743453  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:36:56.748039  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 10:36:56.748118  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 10:36:56.793981  897600 cri.go:89] found id: "21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e"
	I1007 10:36:56.794003  897600 cri.go:89] found id: ""
	I1007 10:36:56.794011  897600 logs.go:282] 1 containers: [21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e]
	I1007 10:36:56.794071  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:36:56.797745  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 10:36:56.797829  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 10:36:56.836301  897600 cri.go:89] found id: "9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354"
	I1007 10:36:56.836326  897600 cri.go:89] found id: ""
	I1007 10:36:56.836335  897600 logs.go:282] 1 containers: [9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354]
	I1007 10:36:56.836396  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:36:56.839818  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 10:36:56.839893  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 10:36:56.877706  897600 cri.go:89] found id: "0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0"
	I1007 10:36:56.877729  897600 cri.go:89] found id: ""
	I1007 10:36:56.877738  897600 logs.go:282] 1 containers: [0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0]
	I1007 10:36:56.877815  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:36:56.881381  897600 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 10:36:56.881468  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 10:36:56.921067  897600 cri.go:89] found id: "a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273"
	I1007 10:36:56.921089  897600 cri.go:89] found id: ""
	I1007 10:36:56.921098  897600 logs.go:282] 1 containers: [a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273]
	I1007 10:36:56.921154  897600 ssh_runner.go:195] Run: which crictl
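The block above shows the harness locating one container ID per control-plane component by running `sudo crictl ps -a --quiet --name=<component>` on the node and confirming the crictl path with `which crictl`; the lines that follow replay each ID through `crictl logs --tail 400` and pull kubelet/CRI-O output from journald. A rough manual reproduction, assuming the addons-952725 profile is still running and that `minikube ssh` is used to reach the node (both assumptions, not shown in the log):

	# list the kube-apiserver container ID, as the harness does above
	minikube -p addons-952725 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	# fetch its last 400 log lines (substitute the ID printed by the previous command)
	minikube -p addons-952725 ssh -- sudo /usr/bin/crictl logs --tail 400 <container-id>
	# kubelet (and CRI-O) logs are collected from journald instead
	minikube -p addons-952725 ssh -- sudo journalctl -u kubelet -n 400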
	I1007 10:36:56.924603  897600 logs.go:123] Gathering logs for kube-apiserver [a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a] ...
	I1007 10:36:56.924630  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a"
	I1007 10:36:57.006047  897600 logs.go:123] Gathering logs for coredns [dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264] ...
	I1007 10:36:57.006096  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264"
	I1007 10:36:57.057018  897600 logs.go:123] Gathering logs for describe nodes ...
	I1007 10:36:57.057056  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 10:36:57.249707  897600 logs.go:123] Gathering logs for dmesg ...
	I1007 10:36:57.249739  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 10:36:57.268267  897600 logs.go:123] Gathering logs for etcd [db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e] ...
	I1007 10:36:57.268298  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e"
	I1007 10:36:57.328214  897600 logs.go:123] Gathering logs for kube-scheduler [21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e] ...
	I1007 10:36:57.328421  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e"
	I1007 10:36:57.397626  897600 logs.go:123] Gathering logs for kube-proxy [9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354] ...
	I1007 10:36:57.397660  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354"
	I1007 10:36:57.462539  897600 logs.go:123] Gathering logs for kube-controller-manager [0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0] ...
	I1007 10:36:57.462568  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0"
	I1007 10:36:57.541360  897600 logs.go:123] Gathering logs for kindnet [a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273] ...
	I1007 10:36:57.541401  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273"
	I1007 10:36:57.581279  897600 logs.go:123] Gathering logs for CRI-O ...
	I1007 10:36:57.581305  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 10:36:57.680992  897600 logs.go:123] Gathering logs for kubelet ...
	I1007 10:36:57.681032  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 10:36:57.752399  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866281    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-952725" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-952725' and this object
	W1007 10:36:57.752648  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866332    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:36:57.752822  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866470    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-952725' and this object
	W1007 10:36:57.753041  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866491    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:36:57.753212  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.899091    1502 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-952725' and this object
	W1007 10:36:57.753417  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.899139    1502 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	I1007 10:36:57.789307  897600 logs.go:123] Gathering logs for container status ...
	I1007 10:36:57.789337  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 10:36:57.863682  897600 out.go:358] Setting ErrFile to fd 2...
	I1007 10:36:57.863710  897600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 10:36:57.863788  897600 out.go:270] X Problems detected in kubelet:
	W1007 10:36:57.863847  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866332    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:36:57.863864  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866470    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-952725' and this object
	W1007 10:36:57.863891  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866491    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:36:57.863899  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.899091    1502 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-952725' and this object
	W1007 10:36:57.863907  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.899139    1502 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	I1007 10:36:57.863915  897600 out.go:358] Setting ErrFile to fd 2...
	I1007 10:36:57.863924  897600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:37:07.864673  897600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 10:37:07.880027  897600 api_server.go:72] duration metric: took 2m5.354347398s to wait for apiserver process to appear ...
	I1007 10:37:07.880057  897600 api_server.go:88] waiting for apiserver healthz status ...
	I1007 10:37:07.880097  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 10:37:07.880167  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 10:37:07.927054  897600 cri.go:89] found id: "a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a"
	I1007 10:37:07.927079  897600 cri.go:89] found id: ""
	I1007 10:37:07.927089  897600 logs.go:282] 1 containers: [a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a]
	I1007 10:37:07.927147  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:07.930899  897600 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 10:37:07.930979  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 10:37:07.975788  897600 cri.go:89] found id: "db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e"
	I1007 10:37:07.975814  897600 cri.go:89] found id: ""
	I1007 10:37:07.975824  897600 logs.go:282] 1 containers: [db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e]
	I1007 10:37:07.975881  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:07.979490  897600 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 10:37:07.979565  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 10:37:08.022892  897600 cri.go:89] found id: "dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264"
	I1007 10:37:08.022919  897600 cri.go:89] found id: ""
	I1007 10:37:08.022928  897600 logs.go:282] 1 containers: [dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264]
	I1007 10:37:08.022996  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:08.027989  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 10:37:08.028070  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 10:37:08.072565  897600 cri.go:89] found id: "21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e"
	I1007 10:37:08.072590  897600 cri.go:89] found id: ""
	I1007 10:37:08.072600  897600 logs.go:282] 1 containers: [21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e]
	I1007 10:37:08.072666  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:08.076407  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 10:37:08.076484  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 10:37:08.118844  897600 cri.go:89] found id: "9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354"
	I1007 10:37:08.118868  897600 cri.go:89] found id: ""
	I1007 10:37:08.118876  897600 logs.go:282] 1 containers: [9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354]
	I1007 10:37:08.118933  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:08.122839  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 10:37:08.122914  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 10:37:08.162329  897600 cri.go:89] found id: "0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0"
	I1007 10:37:08.162353  897600 cri.go:89] found id: ""
	I1007 10:37:08.162362  897600 logs.go:282] 1 containers: [0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0]
	I1007 10:37:08.162423  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:08.165980  897600 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 10:37:08.166049  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 10:37:08.203439  897600 cri.go:89] found id: "a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273"
	I1007 10:37:08.203460  897600 cri.go:89] found id: ""
	I1007 10:37:08.203469  897600 logs.go:282] 1 containers: [a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273]
	I1007 10:37:08.203528  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:08.207017  897600 logs.go:123] Gathering logs for kubelet ...
	I1007 10:37:08.207043  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 10:37:08.271560  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866281    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-952725" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-952725' and this object
	W1007 10:37:08.271835  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866332    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:37:08.272016  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866470    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-952725' and this object
	W1007 10:37:08.272227  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866491    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:37:08.272448  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.899091    1502 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-952725' and this object
	W1007 10:37:08.272656  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.899139    1502 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	I1007 10:37:08.309193  897600 logs.go:123] Gathering logs for kube-scheduler [21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e] ...
	I1007 10:37:08.309229  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e"
	I1007 10:37:08.363518  897600 logs.go:123] Gathering logs for kube-proxy [9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354] ...
	I1007 10:37:08.363549  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354"
	I1007 10:37:08.406239  897600 logs.go:123] Gathering logs for CRI-O ...
	I1007 10:37:08.406265  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 10:37:08.509108  897600 logs.go:123] Gathering logs for container status ...
	I1007 10:37:08.509186  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 10:37:08.611997  897600 logs.go:123] Gathering logs for dmesg ...
	I1007 10:37:08.612034  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 10:37:08.628856  897600 logs.go:123] Gathering logs for describe nodes ...
	I1007 10:37:08.628887  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 10:37:08.763044  897600 logs.go:123] Gathering logs for kube-apiserver [a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a] ...
	I1007 10:37:08.763078  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a"
	I1007 10:37:08.816725  897600 logs.go:123] Gathering logs for etcd [db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e] ...
	I1007 10:37:08.816803  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e"
	I1007 10:37:08.874013  897600 logs.go:123] Gathering logs for coredns [dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264] ...
	I1007 10:37:08.874048  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264"
	I1007 10:37:08.914165  897600 logs.go:123] Gathering logs for kube-controller-manager [0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0] ...
	I1007 10:37:08.914197  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0"
	I1007 10:37:09.007499  897600 logs.go:123] Gathering logs for kindnet [a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273] ...
	I1007 10:37:09.007549  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273"
	I1007 10:37:09.053760  897600 out.go:358] Setting ErrFile to fd 2...
	I1007 10:37:09.053792  897600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 10:37:09.053847  897600 out.go:270] X Problems detected in kubelet:
	W1007 10:37:09.053861  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866332    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:37:09.053868  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866470    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-952725' and this object
	W1007 10:37:09.053893  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866491    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:37:09.053906  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.899091    1502 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-952725' and this object
	W1007 10:37:09.053912  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.899139    1502 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	I1007 10:37:09.053918  897600 out.go:358] Setting ErrFile to fd 2...
	I1007 10:37:09.053931  897600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:37:19.056068  897600 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 10:37:19.064312  897600 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1007 10:37:19.065402  897600 api_server.go:141] control plane version: v1.31.1
	I1007 10:37:19.065427  897600 api_server.go:131] duration metric: took 11.185361694s to wait for apiserver health ...
	I1007 10:37:19.065448  897600 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 10:37:19.065471  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 10:37:19.065544  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 10:37:19.113004  897600 cri.go:89] found id: "a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a"
	I1007 10:37:19.113025  897600 cri.go:89] found id: ""
	I1007 10:37:19.113033  897600 logs.go:282] 1 containers: [a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a]
	I1007 10:37:19.113088  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:19.116801  897600 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 10:37:19.116880  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 10:37:19.159114  897600 cri.go:89] found id: "db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e"
	I1007 10:37:19.159138  897600 cri.go:89] found id: ""
	I1007 10:37:19.159146  897600 logs.go:282] 1 containers: [db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e]
	I1007 10:37:19.159208  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:19.162554  897600 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 10:37:19.162657  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 10:37:19.200865  897600 cri.go:89] found id: "dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264"
	I1007 10:37:19.200890  897600 cri.go:89] found id: ""
	I1007 10:37:19.200899  897600 logs.go:282] 1 containers: [dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264]
	I1007 10:37:19.200980  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:19.204646  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 10:37:19.204757  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 10:37:19.244869  897600 cri.go:89] found id: "21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e"
	I1007 10:37:19.244943  897600 cri.go:89] found id: ""
	I1007 10:37:19.244958  897600 logs.go:282] 1 containers: [21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e]
	I1007 10:37:19.245032  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:19.248688  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 10:37:19.248788  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 10:37:19.289715  897600 cri.go:89] found id: "9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354"
	I1007 10:37:19.289786  897600 cri.go:89] found id: ""
	I1007 10:37:19.289810  897600 logs.go:282] 1 containers: [9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354]
	I1007 10:37:19.289888  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:19.293553  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 10:37:19.293626  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 10:37:19.332066  897600 cri.go:89] found id: "0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0"
	I1007 10:37:19.332135  897600 cri.go:89] found id: ""
	I1007 10:37:19.332146  897600 logs.go:282] 1 containers: [0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0]
	I1007 10:37:19.332237  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:19.335650  897600 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 10:37:19.335762  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 10:37:19.387104  897600 cri.go:89] found id: "a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273"
	I1007 10:37:19.387127  897600 cri.go:89] found id: ""
	I1007 10:37:19.387135  897600 logs.go:282] 1 containers: [a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273]
	I1007 10:37:19.387204  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:19.390737  897600 logs.go:123] Gathering logs for coredns [dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264] ...
	I1007 10:37:19.390798  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264"
	I1007 10:37:19.436352  897600 logs.go:123] Gathering logs for kube-controller-manager [0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0] ...
	I1007 10:37:19.436445  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0"
	I1007 10:37:19.509907  897600 logs.go:123] Gathering logs for container status ...
	I1007 10:37:19.509946  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 10:37:19.566727  897600 logs.go:123] Gathering logs for kube-scheduler [21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e] ...
	I1007 10:37:19.566760  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e"
	I1007 10:37:19.613328  897600 logs.go:123] Gathering logs for kube-proxy [9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354] ...
	I1007 10:37:19.613356  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354"
	I1007 10:37:19.650758  897600 logs.go:123] Gathering logs for kindnet [a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273] ...
	I1007 10:37:19.650786  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273"
	I1007 10:37:19.692504  897600 logs.go:123] Gathering logs for kubelet ...
	I1007 10:37:19.692574  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 10:37:19.766253  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866281    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-952725" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-952725' and this object
	W1007 10:37:19.766497  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866332    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:37:19.766672  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866470    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-952725' and this object
	W1007 10:37:19.766884  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866491    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:37:19.767051  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.899091    1502 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-952725' and this object
	W1007 10:37:19.767255  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.899139    1502 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	I1007 10:37:19.804156  897600 logs.go:123] Gathering logs for dmesg ...
	I1007 10:37:19.804186  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 10:37:19.821325  897600 logs.go:123] Gathering logs for describe nodes ...
	I1007 10:37:19.821353  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 10:37:19.956686  897600 logs.go:123] Gathering logs for kube-apiserver [a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a] ...
	I1007 10:37:19.956717  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a"
	I1007 10:37:20.021629  897600 logs.go:123] Gathering logs for etcd [db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e] ...
	I1007 10:37:20.021676  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e"
	I1007 10:37:20.070025  897600 logs.go:123] Gathering logs for CRI-O ...
	I1007 10:37:20.070065  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 10:37:20.171276  897600 out.go:358] Setting ErrFile to fd 2...
	I1007 10:37:20.171311  897600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 10:37:20.171372  897600 out.go:270] X Problems detected in kubelet:
	W1007 10:37:20.171385  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866332    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:37:20.171400  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866470    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-952725' and this object
	W1007 10:37:20.171408  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866491    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:37:20.171420  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.899091    1502 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-952725' and this object
	W1007 10:37:20.171427  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.899139    1502 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	I1007 10:37:20.171435  897600 out.go:358] Setting ErrFile to fd 2...
	I1007 10:37:20.171446  897600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:37:30.183789  897600 system_pods.go:59] 18 kube-system pods found
	I1007 10:37:30.183838  897600 system_pods.go:61] "coredns-7c65d6cfc9-6b52m" [ed7fc72d-beca-46bb-b394-215642359254] Running
	I1007 10:37:30.183847  897600 system_pods.go:61] "csi-hostpath-attacher-0" [b45f53e6-4940-4b1c-ae02-70ddf7230f58] Running
	I1007 10:37:30.183853  897600 system_pods.go:61] "csi-hostpath-resizer-0" [d17c3c07-b028-47ca-9ba4-df73aa7b84af] Running
	I1007 10:37:30.183857  897600 system_pods.go:61] "csi-hostpathplugin-rddzj" [9e9d3c3a-bf48-4754-9099-da157ca7adc5] Running
	I1007 10:37:30.183861  897600 system_pods.go:61] "etcd-addons-952725" [e7d6ae0d-33ca-4dc4-adb0-33ccc4d5c466] Running
	I1007 10:37:30.183865  897600 system_pods.go:61] "kindnet-57hzc" [31cd7dce-84b2-4be6-89c3-b529c0945b92] Running
	I1007 10:37:30.183869  897600 system_pods.go:61] "kube-apiserver-addons-952725" [174e4f2a-9fae-42fc-bdbc-7d90072c4d6f] Running
	I1007 10:37:30.183873  897600 system_pods.go:61] "kube-controller-manager-addons-952725" [938c6601-5924-451b-9e0f-418459f50c91] Running
	I1007 10:37:30.183879  897600 system_pods.go:61] "kube-ingress-dns-minikube" [cd331aef-a9f6-4aef-8e5c-4f873434c12f] Running
	I1007 10:37:30.183883  897600 system_pods.go:61] "kube-proxy-9dvhw" [8a3c0df3-9298-4f23-a7ca-9732011063aa] Running
	I1007 10:37:30.183887  897600 system_pods.go:61] "kube-scheduler-addons-952725" [ad1890e4-1d3f-470f-9a9f-e46e135a4547] Running
	I1007 10:37:30.183891  897600 system_pods.go:61] "metrics-server-84c5f94fbc-6vc27" [72e58343-87c1-4934-82b9-e0757b74087f] Running
	I1007 10:37:30.183896  897600 system_pods.go:61] "nvidia-device-plugin-daemonset-fmzt5" [a9637f8d-dccb-461d-97b2-a4f5108a27d6] Running
	I1007 10:37:30.183900  897600 system_pods.go:61] "registry-66c9cd494c-ckskn" [d430f6b1-cd25-4f9f-aa81-282aa63589cf] Running
	I1007 10:37:30.183904  897600 system_pods.go:61] "registry-proxy-lxfj2" [0a453af2-0cb5-4656-8c53-e0132b6c6cfc] Running
	I1007 10:37:30.183909  897600 system_pods.go:61] "snapshot-controller-56fcc65765-6nkvg" [f0a03f47-231c-49a7-995c-fa8932f24be8] Running
	I1007 10:37:30.183914  897600 system_pods.go:61] "snapshot-controller-56fcc65765-c5p5m" [5918748e-2aa0-4ef1-83d2-0de4f14da617] Running
	I1007 10:37:30.183918  897600 system_pods.go:61] "storage-provisioner" [0021a05c-7712-4570-8a15-b7911ca5e125] Running
	I1007 10:37:30.183933  897600 system_pods.go:74] duration metric: took 11.118470081s to wait for pod list to return data ...
	I1007 10:37:30.183942  897600 default_sa.go:34] waiting for default service account to be created ...
	I1007 10:37:30.187277  897600 default_sa.go:45] found service account: "default"
	I1007 10:37:30.187312  897600 default_sa.go:55] duration metric: took 3.3625ms for default service account to be created ...
	I1007 10:37:30.187323  897600 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 10:37:30.198035  897600 system_pods.go:86] 18 kube-system pods found
	I1007 10:37:30.198073  897600 system_pods.go:89] "coredns-7c65d6cfc9-6b52m" [ed7fc72d-beca-46bb-b394-215642359254] Running
	I1007 10:37:30.198084  897600 system_pods.go:89] "csi-hostpath-attacher-0" [b45f53e6-4940-4b1c-ae02-70ddf7230f58] Running
	I1007 10:37:30.198090  897600 system_pods.go:89] "csi-hostpath-resizer-0" [d17c3c07-b028-47ca-9ba4-df73aa7b84af] Running
	I1007 10:37:30.198094  897600 system_pods.go:89] "csi-hostpathplugin-rddzj" [9e9d3c3a-bf48-4754-9099-da157ca7adc5] Running
	I1007 10:37:30.198100  897600 system_pods.go:89] "etcd-addons-952725" [e7d6ae0d-33ca-4dc4-adb0-33ccc4d5c466] Running
	I1007 10:37:30.198107  897600 system_pods.go:89] "kindnet-57hzc" [31cd7dce-84b2-4be6-89c3-b529c0945b92] Running
	I1007 10:37:30.198112  897600 system_pods.go:89] "kube-apiserver-addons-952725" [174e4f2a-9fae-42fc-bdbc-7d90072c4d6f] Running
	I1007 10:37:30.198117  897600 system_pods.go:89] "kube-controller-manager-addons-952725" [938c6601-5924-451b-9e0f-418459f50c91] Running
	I1007 10:37:30.198124  897600 system_pods.go:89] "kube-ingress-dns-minikube" [cd331aef-a9f6-4aef-8e5c-4f873434c12f] Running
	I1007 10:37:30.198129  897600 system_pods.go:89] "kube-proxy-9dvhw" [8a3c0df3-9298-4f23-a7ca-9732011063aa] Running
	I1007 10:37:30.198134  897600 system_pods.go:89] "kube-scheduler-addons-952725" [ad1890e4-1d3f-470f-9a9f-e46e135a4547] Running
	I1007 10:37:30.198147  897600 system_pods.go:89] "metrics-server-84c5f94fbc-6vc27" [72e58343-87c1-4934-82b9-e0757b74087f] Running
	I1007 10:37:30.198152  897600 system_pods.go:89] "nvidia-device-plugin-daemonset-fmzt5" [a9637f8d-dccb-461d-97b2-a4f5108a27d6] Running
	I1007 10:37:30.198156  897600 system_pods.go:89] "registry-66c9cd494c-ckskn" [d430f6b1-cd25-4f9f-aa81-282aa63589cf] Running
	I1007 10:37:30.198163  897600 system_pods.go:89] "registry-proxy-lxfj2" [0a453af2-0cb5-4656-8c53-e0132b6c6cfc] Running
	I1007 10:37:30.198168  897600 system_pods.go:89] "snapshot-controller-56fcc65765-6nkvg" [f0a03f47-231c-49a7-995c-fa8932f24be8] Running
	I1007 10:37:30.198172  897600 system_pods.go:89] "snapshot-controller-56fcc65765-c5p5m" [5918748e-2aa0-4ef1-83d2-0de4f14da617] Running
	I1007 10:37:30.198178  897600 system_pods.go:89] "storage-provisioner" [0021a05c-7712-4570-8a15-b7911ca5e125] Running
	I1007 10:37:30.198187  897600 system_pods.go:126] duration metric: took 10.856782ms to wait for k8s-apps to be running ...
	I1007 10:37:30.198198  897600 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 10:37:30.198258  897600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:37:30.213256  897600 system_svc.go:56] duration metric: took 15.047814ms WaitForService to wait for kubelet
	I1007 10:37:30.213288  897600 kubeadm.go:582] duration metric: took 2m27.687613381s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:37:30.213310  897600 node_conditions.go:102] verifying NodePressure condition ...
	I1007 10:37:30.217245  897600 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 10:37:30.217284  897600 node_conditions.go:123] node cpu capacity is 2
	I1007 10:37:30.217297  897600 node_conditions.go:105] duration metric: took 3.980603ms to run NodePressure ...
	I1007 10:37:30.217309  897600 start.go:241] waiting for startup goroutines ...
	I1007 10:37:30.217318  897600 start.go:246] waiting for cluster config update ...
	I1007 10:37:30.217334  897600 start.go:255] writing updated cluster config ...
	I1007 10:37:30.217649  897600 ssh_runner.go:195] Run: rm -f paused
	I1007 10:37:30.567611  897600 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 10:37:30.569884  897600 out.go:177] * Done! kubectl is now configured to use "addons-952725" cluster and "default" namespace by default
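
The wait loop recorded above polls the apiserver healthz endpoint, waits for the kube-system pods, the default service account, and the kubelet service, then reports "Done!". A minimal sketch of the same checks run by hand, assuming the apiserver address (https://192.168.58.2:8443) and profile/context name (addons-952725) shown in the log:

	# Query the apiserver health endpoint directly; a healthy server answers "ok"
	curl -k https://192.168.58.2:8443/healthz
	# List kube-system pods, mirroring the system_pods wait in the log
	kubectl --context addons-952725 get pods -n kube-system
	# Confirm the kubelet service is active, as the WaitForService step does
	minikube ssh -p addons-952725 -- sudo systemctl is-active kubelet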
	
	
	==> CRI-O <==
	Oct 07 10:48:34 addons-952725 crio[959]: time="2024-10-07 10:48:34.299320997Z" level=info msg="Started container" PID=13700 containerID=f810b25076c6373f4f05874eb963d3fdc9ca7759eea5b21470e5025e0b9755cd description=default/busybox/busybox id=7237bf99-1a7d-4e99-98d5-5493741a6733 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0c88881f09adab07ab741fc3dcd5c665951e770258782f4528dcf6928fc2a239
	Oct 07 10:49:34 addons-952725 crio[959]: time="2024-10-07 10:49:34.772538404Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-m2tq6/POD" id=9ebff45e-5a09-4414-9efb-9d34931e3d75 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 07 10:49:34 addons-952725 crio[959]: time="2024-10-07 10:49:34.772604709Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 07 10:49:34 addons-952725 crio[959]: time="2024-10-07 10:49:34.829057192Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-m2tq6 Namespace:default ID:f923962a6ac46a50bca7b2cac196a5f56783a1ce073edb7a522e5d92734319ab UID:2ff43eae-491b-4ce4-823c-b7078bbb835a NetNS:/var/run/netns/89052004-2016-47c1-b6c6-36389dee900f Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 07 10:49:34 addons-952725 crio[959]: time="2024-10-07 10:49:34.829109746Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-m2tq6 to CNI network \"kindnet\" (type=ptp)"
	Oct 07 10:49:34 addons-952725 crio[959]: time="2024-10-07 10:49:34.843227451Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-m2tq6 Namespace:default ID:f923962a6ac46a50bca7b2cac196a5f56783a1ce073edb7a522e5d92734319ab UID:2ff43eae-491b-4ce4-823c-b7078bbb835a NetNS:/var/run/netns/89052004-2016-47c1-b6c6-36389dee900f Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 07 10:49:34 addons-952725 crio[959]: time="2024-10-07 10:49:34.843404828Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-m2tq6 for CNI network kindnet (type=ptp)"
	Oct 07 10:49:34 addons-952725 crio[959]: time="2024-10-07 10:49:34.848207356Z" level=info msg="Ran pod sandbox f923962a6ac46a50bca7b2cac196a5f56783a1ce073edb7a522e5d92734319ab with infra container: default/hello-world-app-55bf9c44b4-m2tq6/POD" id=9ebff45e-5a09-4414-9efb-9d34931e3d75 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 07 10:49:34 addons-952725 crio[959]: time="2024-10-07 10:49:34.851845223Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=34e2324c-ac23-4316-80bb-78b0ce845c66 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 10:49:34 addons-952725 crio[959]: time="2024-10-07 10:49:34.852088503Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=34e2324c-ac23-4316-80bb-78b0ce845c66 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 10:49:34 addons-952725 crio[959]: time="2024-10-07 10:49:34.855338770Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=14bea15d-07fa-457c-bebc-34b4ddb4d1b7 name=/runtime.v1.ImageService/PullImage
	Oct 07 10:49:34 addons-952725 crio[959]: time="2024-10-07 10:49:34.859486607Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 07 10:49:35 addons-952725 crio[959]: time="2024-10-07 10:49:35.141945503Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 07 10:49:35 addons-952725 crio[959]: time="2024-10-07 10:49:35.797693687Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=14bea15d-07fa-457c-bebc-34b4ddb4d1b7 name=/runtime.v1.ImageService/PullImage
	Oct 07 10:49:35 addons-952725 crio[959]: time="2024-10-07 10:49:35.799109505Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9c1762a7-d36d-4c92-876a-39cb442885c5 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 10:49:35 addons-952725 crio[959]: time="2024-10-07 10:49:35.799838748Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9c1762a7-d36d-4c92-876a-39cb442885c5 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 10:49:35 addons-952725 crio[959]: time="2024-10-07 10:49:35.800688137Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=bf458623-5762-44f1-ac17-826e01198166 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 10:49:35 addons-952725 crio[959]: time="2024-10-07 10:49:35.801330528Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=bf458623-5762-44f1-ac17-826e01198166 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 10:49:35 addons-952725 crio[959]: time="2024-10-07 10:49:35.802741579Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-m2tq6/hello-world-app" id=70d441bb-4d72-48b3-a98d-7d76872f8d8d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 07 10:49:35 addons-952725 crio[959]: time="2024-10-07 10:49:35.802837160Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 07 10:49:35 addons-952725 crio[959]: time="2024-10-07 10:49:35.831944655Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c7fd3b922d00c2b0b03a753fc49a48f6e96286c4fc6f4abcf2155f5e367632f3/merged/etc/passwd: no such file or directory"
	Oct 07 10:49:35 addons-952725 crio[959]: time="2024-10-07 10:49:35.832006842Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c7fd3b922d00c2b0b03a753fc49a48f6e96286c4fc6f4abcf2155f5e367632f3/merged/etc/group: no such file or directory"
	Oct 07 10:49:35 addons-952725 crio[959]: time="2024-10-07 10:49:35.873843408Z" level=info msg="Created container f35a376b619995aff93d859fb1a05a8144e04841385f115e73e691fc1a0b154a: default/hello-world-app-55bf9c44b4-m2tq6/hello-world-app" id=70d441bb-4d72-48b3-a98d-7d76872f8d8d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 07 10:49:35 addons-952725 crio[959]: time="2024-10-07 10:49:35.874628108Z" level=info msg="Starting container: f35a376b619995aff93d859fb1a05a8144e04841385f115e73e691fc1a0b154a" id=b914bdd6-9652-423f-87d7-0969d1042e53 name=/runtime.v1.RuntimeService/StartContainer
	Oct 07 10:49:35 addons-952725 crio[959]: time="2024-10-07 10:49:35.882109034Z" level=info msg="Started container" PID=13860 containerID=f35a376b619995aff93d859fb1a05a8144e04841385f115e73e691fc1a0b154a description=default/hello-world-app-55bf9c44b4-m2tq6/hello-world-app id=b914bdd6-9652-423f-87d7-0969d1042e53 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f923962a6ac46a50bca7b2cac196a5f56783a1ce073edb7a522e5d92734319ab
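
The CRI-O entries above trace the hello-world-app pod from sandbox creation through the echo-server image pull to container start. A minimal sketch for inspecting the same objects on the node with crictl (the image reference and container ID are taken from the log lines above; the commands assume the default CRI-O socket):

	# Show the container that CRI-O just started
	sudo crictl ps --name hello-world-app
	# Inspect the pulled image and its digests
	sudo crictl inspecti docker.io/kicbase/echo-server:1.0
	# Tail the container's own log
	sudo crictl logs --tail 50 f35a376b619995aff93d859fb1a05a8144e04841385f115e73e691fc1a0b154a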
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	f35a376b61999       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   f923962a6ac46       hello-world-app-55bf9c44b4-m2tq6
	f810b25076c63       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          About a minute ago       Running             busybox                   0                   0c88881f09ada       busybox
	4956146809a68       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago            Running             nginx                     0                   c48674f24f4ff       nginx
	71ab075263c7f       registry.k8s.io/ingress-nginx/controller@sha256:22f9d129ae8c89a2cabbd13af3c1668944f3dd68fec186199b7024a0a2fc75b3             12 minutes ago           Running             controller                0                   a9664ef115034       ingress-nginx-controller-bc57996ff-vdk9h
	5e5de2fa02e8e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   13 minutes ago           Exited              patch                     0                   93c371384f624       ingress-nginx-admission-patch-ppdgq
	a10b2cf1ed463       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   13 minutes ago           Exited              create                    0                   4d058b31939d0       ingress-nginx-admission-create-zzdln
	ecd948239ed25       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             13 minutes ago           Running             minikube-ingress-dns      0                   428eed4a474cf       kube-ingress-dns-minikube
	774aa5975d8da       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             13 minutes ago           Running             local-path-provisioner    0                   fe47699131a23       local-path-provisioner-86d989889c-kfkrw
	2a59e268f75cf       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        13 minutes ago           Running             metrics-server            0                   acd50e2015e95       metrics-server-84c5f94fbc-6vc27
	c313b70ac4ef3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             13 minutes ago           Running             storage-provisioner       0                   364a50ef72e82       storage-provisioner
	dd456422afb90       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             13 minutes ago           Running             coredns                   0                   680d1bfd238f1       coredns-7c65d6cfc9-6b52m
	9bbd85a195c8e       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             14 minutes ago           Running             kube-proxy                0                   97aad492f0b39       kube-proxy-9dvhw
	a841da971afdb       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                             14 minutes ago           Running             kindnet-cni               0                   741d891f245dd       kindnet-57hzc
	a3303a3c982ff       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             14 minutes ago           Running             kube-apiserver            0                   c008e76944ff3       kube-apiserver-addons-952725
	0345daaeb9c8f       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             14 minutes ago           Running             kube-controller-manager   0                   d2914c35e07ac       kube-controller-manager-addons-952725
	21c0c933f1c0a       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             14 minutes ago           Running             kube-scheduler            0                   0ec301299d3ac       kube-scheduler-addons-952725
	db4e405a62283       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             14 minutes ago           Running             etcd                      0                   efc0fac896569       etcd-addons-952725
	
	
	==> coredns [dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264] <==
	[INFO] 10.244.0.18:42557 - 18501 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002576947s
	[INFO] 10.244.0.18:42557 - 25615 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000124397s
	[INFO] 10.244.0.18:42557 - 57273 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000102645s
	[INFO] 10.244.0.18:37651 - 40853 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000115716s
	[INFO] 10.244.0.18:37651 - 41066 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000098198s
	[INFO] 10.244.0.18:48384 - 38548 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066108s
	[INFO] 10.244.0.18:48384 - 38368 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000082724s
	[INFO] 10.244.0.18:41966 - 10947 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000192203s
	[INFO] 10.244.0.18:41966 - 10519 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000199014s
	[INFO] 10.244.0.18:34670 - 46060 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001719002s
	[INFO] 10.244.0.18:34670 - 46266 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001778529s
	[INFO] 10.244.0.18:57585 - 31524 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000050551s
	[INFO] 10.244.0.18:57585 - 31360 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000159334s
	[INFO] 10.244.0.20:58218 - 55847 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000194591s
	[INFO] 10.244.0.20:42558 - 39002 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000142259s
	[INFO] 10.244.0.20:38016 - 55455 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00020822s
	[INFO] 10.244.0.20:44129 - 31610 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000137246s
	[INFO] 10.244.0.20:38924 - 31245 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000134514s
	[INFO] 10.244.0.20:40166 - 43261 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119778s
	[INFO] 10.244.0.20:35584 - 15192 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002170214s
	[INFO] 10.244.0.20:39579 - 46682 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001997046s
	[INFO] 10.244.0.20:40397 - 54315 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003842078s
	[INFO] 10.244.0.20:42074 - 28493 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001903221s
	[INFO] 10.244.0.22:53849 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000220388s
	[INFO] 10.244.0.22:33804 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000133406s
	
	
	==> describe nodes <==
	Name:               addons-952725
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-952725
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=addons-952725
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T10_34_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-952725
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:34:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-952725
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:49:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 10:49:03 +0000   Mon, 07 Oct 2024 10:34:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 10:49:03 +0000   Mon, 07 Oct 2024 10:34:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 10:49:03 +0000   Mon, 07 Oct 2024 10:34:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 10:49:03 +0000   Mon, 07 Oct 2024 10:35:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    addons-952725
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 667ded5c667f48beaa77c802f4949d98
	  System UUID:                89c65a8b-1435-4af7-a1ff-2983fc44da00
	  Boot ID:                    9a8fefe6-3962-4cb9-809a-2b740ac8992f
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-world-app-55bf9c44b4-m2tq6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-vdk9h    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-6b52m                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-addons-952725                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-57hzc                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-addons-952725                250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-952725       200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-9dvhw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-952725                100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-84c5f94fbc-6vc27             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-kfkrw     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 14m   kube-proxy       
	  Normal   Starting                 14m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  14m   kubelet          Node addons-952725 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m   kubelet          Node addons-952725 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m   kubelet          Node addons-952725 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m   node-controller  Node addons-952725 event: Registered Node addons-952725 in Controller
	  Normal   NodeReady                13m   kubelet          Node addons-952725 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e] <==
	{"level":"info","ts":"2024-10-07T10:34:51.248430Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2024-10-07T10:34:51.248438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2024-10-07T10:34:51.251644Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T10:34:51.252486Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:addons-952725 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-07T10:34:51.252618Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T10:34:51.252856Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T10:34:51.253012Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T10:34:51.253076Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T10:34:51.253098Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T10:34:51.253696Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T10:34:51.254518Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2024-10-07T10:34:51.255269Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T10:34:51.264925Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-07T10:34:51.266572Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T10:34:51.266603Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-07T10:35:03.535075Z","caller":"traceutil/trace.go:171","msg":"trace[2132271953] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"284.96651ms","start":"2024-10-07T10:35:03.250093Z","end":"2024-10-07T10:35:03.535060Z","steps":["trace[2132271953] 'process raft request'  (duration: 284.86585ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T10:35:03.673545Z","caller":"traceutil/trace.go:171","msg":"trace[1695671777] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"188.156093ms","start":"2024-10-07T10:35:03.485379Z","end":"2024-10-07T10:35:03.673535Z","steps":["trace[1695671777] 'process raft request'  (duration: 140.968257ms)","trace[1695671777] 'compare'  (duration: 47.016137ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T10:35:03.673488Z","caller":"traceutil/trace.go:171","msg":"trace[214699764] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"166.83918ms","start":"2024-10-07T10:35:03.506636Z","end":"2024-10-07T10:35:03.673475Z","steps":["trace[214699764] 'process raft request'  (duration: 166.801207ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T10:35:03.878205Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T10:35:03.506608Z","time spent":"371.140045ms","remote":"127.0.0.1:56670","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3978,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-shtgd\" mod_revision:330 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-shtgd\" value_size:3919 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-shtgd\" > >"}
	{"level":"info","ts":"2024-10-07T10:35:03.931468Z","caller":"traceutil/trace.go:171","msg":"trace[732002138] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"201.577926ms","start":"2024-10-07T10:35:03.729870Z","end":"2024-10-07T10:35:03.931448Z","steps":["trace[732002138] 'process raft request'  (duration: 158.749519ms)","trace[732002138] 'compare'  (duration: 42.586169ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T10:35:04.868891Z","caller":"traceutil/trace.go:171","msg":"trace[1459735235] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"114.751638ms","start":"2024-10-07T10:35:04.754121Z","end":"2024-10-07T10:35:04.868873Z","steps":["trace[1459735235] 'process raft request'  (duration: 114.599114ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T10:35:06.268722Z","caller":"traceutil/trace.go:171","msg":"trace[2118704945] transaction","detail":"{read_only:false; response_revision:358; number_of_response:1; }","duration":"120.02973ms","start":"2024-10-07T10:35:06.148673Z","end":"2024-10-07T10:35:06.268702Z","steps":["trace[2118704945] 'process raft request'  (duration: 115.764183ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T10:44:52.166972Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1473}
	{"level":"info","ts":"2024-10-07T10:44:52.198132Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1473,"took":"30.736316ms","hash":748900064,"current-db-size-bytes":6008832,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3014656,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2024-10-07T10:44:52.198199Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":748900064,"revision":1473,"compact-revision":-1}
	
	
	==> kernel <==
	 10:49:36 up  6:32,  0 users,  load average: 0.80, 0.56, 1.29
	Linux addons-952725 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273] <==
	I1007 10:47:35.673125       1 main.go:299] handling current node
	I1007 10:47:45.674067       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:47:45.674103       1 main.go:299] handling current node
	I1007 10:47:55.676680       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:47:55.676720       1 main.go:299] handling current node
	I1007 10:48:05.673122       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:48:05.673238       1 main.go:299] handling current node
	I1007 10:48:15.674166       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:48:15.674206       1 main.go:299] handling current node
	I1007 10:48:25.680109       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:48:25.680144       1 main.go:299] handling current node
	I1007 10:48:35.672503       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:48:35.672656       1 main.go:299] handling current node
	I1007 10:48:45.672934       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:48:45.672971       1 main.go:299] handling current node
	I1007 10:48:55.681395       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:48:55.681434       1 main.go:299] handling current node
	I1007 10:49:05.672276       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:49:05.672310       1 main.go:299] handling current node
	I1007 10:49:15.673746       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:49:15.673880       1 main.go:299] handling current node
	I1007 10:49:25.681777       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:49:25.681811       1 main.go:299] handling current node
	I1007 10:49:35.672723       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:49:35.672756       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a] <==
	 > logger="UnhandledError"
	E1007 10:36:56.227761       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1007 10:36:56.237336       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1007 10:45:58.327959       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1007 10:46:24.302888       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.238.130"}
	I1007 10:46:27.302998       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1007 10:46:55.770138       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1007 10:46:56.666218       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:46:56.674488       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 10:46:56.707534       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:46:56.707690       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 10:46:56.770325       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:46:56.771113       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 10:46:56.846911       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:46:56.846961       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 10:46:56.850959       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:46:56.851001       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1007 10:46:57.848508       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1007 10:46:57.851773       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1007 10:46:57.954494       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1007 10:47:09.626739       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1007 10:47:10.667362       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1007 10:47:15.188377       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1007 10:47:15.491369       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.61.33"}
	I1007 10:49:34.709827       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.233.217"}
	
	
	==> kube-controller-manager [0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0] <==
	W1007 10:48:04.933198       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:48:04.933241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:48:06.640835       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:48:06.640876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:48:25.558514       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:48:25.558561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:48:27.450325       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:48:27.450366       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:48:47.586020       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:48:47.586064       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:48:49.579672       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:48:49.579716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:48:57.530679       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:48:57.530723       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 10:49:03.610522       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-952725"
	W1007 10:49:11.843276       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:49:11.843322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:49:23.258175       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:49:23.258216       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 10:49:34.467952       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.450178ms"
	I1007 10:49:34.480571       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.517674ms"
	I1007 10:49:34.481131       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="46.957µs"
	I1007 10:49:34.497073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="47.901µs"
	I1007 10:49:36.190232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.271351ms"
	I1007 10:49:36.190405       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="41.46µs"
	
	
	==> kube-proxy [9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354] <==
	I1007 10:35:07.333905       1 server_linux.go:66] "Using iptables proxy"
	I1007 10:35:08.233196       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.58.2"]
	E1007 10:35:08.238207       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 10:35:08.292390       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1007 10:35:08.292522       1 server_linux.go:169] "Using iptables Proxier"
	I1007 10:35:08.294294       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 10:35:08.294746       1 server.go:483] "Version info" version="v1.31.1"
	I1007 10:35:08.294934       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 10:35:08.296089       1 config.go:199] "Starting service config controller"
	I1007 10:35:08.296168       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 10:35:08.296236       1 config.go:105] "Starting endpoint slice config controller"
	I1007 10:35:08.296564       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 10:35:08.298228       1 config.go:328] "Starting node config controller"
	I1007 10:35:08.298433       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 10:35:08.400005       1 shared_informer.go:320] Caches are synced for node config
	I1007 10:35:08.400237       1 shared_informer.go:320] Caches are synced for service config
	I1007 10:35:08.400284       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e] <==
	W1007 10:34:55.590927       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 10:34:55.590994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.591021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 10:34:55.591105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.591114       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 10:34:55.591195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.591165       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1007 10:34:55.591278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 10:34:55.591303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E1007 10:34:55.591289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.591408       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 10:34:55.591425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.591482       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 10:34:55.591498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.591490       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1007 10:34:55.591576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 10:34:55.591594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1007 10:34:55.591574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.591544       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 10:34:55.591671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.591068       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 10:34:55.591697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.590964       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 10:34:55.591717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1007 10:34:56.784424       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 10:48:07 addons-952725 kubelet[1502]: E1007 10:48:07.670108    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298087669859643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:578382,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:48:07 addons-952725 kubelet[1502]: E1007 10:48:07.670149    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298087669859643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:578382,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:48:17 addons-952725 kubelet[1502]: E1007 10:48:17.672850    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298097672603883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:578382,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:48:17 addons-952725 kubelet[1502]: E1007 10:48:17.672889    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298097672603883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:578382,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:48:20 addons-952725 kubelet[1502]: I1007 10:48:20.358486    1502 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 10:48:20 addons-952725 kubelet[1502]: E1007 10:48:20.359670    1502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="09d43b25-cfc7-418e-8afe-3f374b584082"
	Oct 07 10:48:27 addons-952725 kubelet[1502]: E1007 10:48:27.675792    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298107675532345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:578382,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:48:27 addons-952725 kubelet[1502]: E1007 10:48:27.675832    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298107675532345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:578382,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:48:31 addons-952725 kubelet[1502]: I1007 10:48:31.358338    1502 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 10:48:35 addons-952725 kubelet[1502]: I1007 10:48:35.024871    1502 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 10:48:35 addons-952725 kubelet[1502]: I1007 10:48:35.037373    1502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=78.143648087 podStartE2EDuration="1m20.037353181s" podCreationTimestamp="2024-10-07 10:47:15 +0000 UTC" firstStartedPulling="2024-10-07 10:47:15.755283945 +0000 UTC m=+738.555854398" lastFinishedPulling="2024-10-07 10:47:17.648989039 +0000 UTC m=+740.449559492" observedRunningTime="2024-10-07 10:47:17.865067705 +0000 UTC m=+740.665638158" watchObservedRunningTime="2024-10-07 10:48:35.037353181 +0000 UTC m=+817.837923634"
	Oct 07 10:48:37 addons-952725 kubelet[1502]: E1007 10:48:37.678321    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298117678051021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:48:37 addons-952725 kubelet[1502]: E1007 10:48:37.678363    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298117678051021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:48:47 addons-952725 kubelet[1502]: E1007 10:48:47.681455    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298127681189812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:48:47 addons-952725 kubelet[1502]: E1007 10:48:47.681487    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298127681189812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:48:57 addons-952725 kubelet[1502]: E1007 10:48:57.684085    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298137683794655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:48:57 addons-952725 kubelet[1502]: E1007 10:48:57.684587    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298137683794655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:49:07 addons-952725 kubelet[1502]: E1007 10:49:07.691975    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298147689859884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:49:07 addons-952725 kubelet[1502]: E1007 10:49:07.692500    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298147689859884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:49:17 addons-952725 kubelet[1502]: E1007 10:49:17.694871    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298157694627170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:49:17 addons-952725 kubelet[1502]: E1007 10:49:17.694907    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298157694627170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:49:27 addons-952725 kubelet[1502]: E1007 10:49:27.697543    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298167697308498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:49:27 addons-952725 kubelet[1502]: E1007 10:49:27.697585    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298167697308498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587707,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:49:34 addons-952725 kubelet[1502]: I1007 10:49:34.467277    1502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=60.898690846 podStartE2EDuration="12m3.467257246s" podCreationTimestamp="2024-10-07 10:37:31 +0000 UTC" firstStartedPulling="2024-10-07 10:37:31.658580813 +0000 UTC m=+154.459151265" lastFinishedPulling="2024-10-07 10:48:34.227147212 +0000 UTC m=+817.027717665" observedRunningTime="2024-10-07 10:48:35.03828156 +0000 UTC m=+817.838852013" watchObservedRunningTime="2024-10-07 10:49:34.467257246 +0000 UTC m=+877.267827699"
	Oct 07 10:49:34 addons-952725 kubelet[1502]: I1007 10:49:34.645919    1502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj6qq\" (UniqueName: \"kubernetes.io/projected/2ff43eae-491b-4ce4-823c-b7078bbb835a-kube-api-access-xj6qq\") pod \"hello-world-app-55bf9c44b4-m2tq6\" (UID: \"2ff43eae-491b-4ce4-823c-b7078bbb835a\") " pod="default/hello-world-app-55bf9c44b4-m2tq6"
	
	
	==> storage-provisioner [c313b70ac4ef3b6236c0631c0052cfe4654849c5b111986e269fc26359577187] <==
	I1007 10:35:46.887649       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 10:35:46.902053       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 10:35:46.902272       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 10:35:46.912935       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 10:35:46.913198       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-952725_5cbfe9d8-3535-413a-83a5-d5ab752ef72f!
	I1007 10:35:46.914206       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e74caa12-d903-4476-a949-8f0d8e425f00", APIVersion:"v1", ResourceVersion:"879", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-952725_5cbfe9d8-3535-413a-83a5-d5ab752ef72f became leader
	I1007 10:35:47.014065       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-952725_5cbfe9d8-3535-413a-83a5-d5ab752ef72f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-952725 -n addons-952725
helpers_test.go:261: (dbg) Run:  kubectl --context addons-952725 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-zzdln ingress-nginx-admission-patch-ppdgq
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-952725 describe pod ingress-nginx-admission-create-zzdln ingress-nginx-admission-patch-ppdgq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-952725 describe pod ingress-nginx-admission-create-zzdln ingress-nginx-admission-patch-ppdgq: exit status 1 (81.727135ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zzdln" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ppdgq" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-952725 describe pod ingress-nginx-admission-create-zzdln ingress-nginx-admission-patch-ppdgq: exit status 1
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-952725 addons disable ingress-dns --alsologtostderr -v=1: (1.521047914s)
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 addons disable ingress --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-952725 addons disable ingress --alsologtostderr -v=1: (7.743305289s)
--- FAIL: TestAddons/parallel/Ingress (152.09s)

TestAddons/parallel/MetricsServer (340.16s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.513608ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-6vc27" [72e58343-87c1-4934-82b9-e0757b74087f] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004258175s
addons_test.go:402: (dbg) Run:  kubectl --context addons-952725 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-952725 top pods -n kube-system: exit status 1 (95.060355ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6b52m, age: 11m44.322612244s

** /stderr **
I1007 10:46:46.325781  896726 retry.go:31] will retry after 2.885618433s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-952725 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-952725 top pods -n kube-system: exit status 1 (87.754002ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6b52m, age: 11m47.296827448s

** /stderr **
I1007 10:46:49.299520  896726 retry.go:31] will retry after 5.023129926s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-952725 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-952725 top pods -n kube-system: exit status 1 (96.076012ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6b52m, age: 11m52.417136149s

** /stderr **
I1007 10:46:54.419941  896726 retry.go:31] will retry after 5.295959924s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-952725 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-952725 top pods -n kube-system: exit status 1 (95.143039ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6b52m, age: 11m57.810147255s

** /stderr **
I1007 10:46:59.813191  896726 retry.go:31] will retry after 7.5008901s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-952725 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-952725 top pods -n kube-system: exit status 1 (88.312079ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6b52m, age: 12m5.399509696s

** /stderr **
I1007 10:47:07.403480  896726 retry.go:31] will retry after 21.320323557s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-952725 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-952725 top pods -n kube-system: exit status 1 (88.779819ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6b52m, age: 12m26.813456425s

** /stderr **
I1007 10:47:28.816495  896726 retry.go:31] will retry after 15.615345479s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-952725 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-952725 top pods -n kube-system: exit status 1 (88.194381ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6b52m, age: 12m42.517676936s

** /stderr **
I1007 10:47:44.520723  896726 retry.go:31] will retry after 36.842979771s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-952725 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-952725 top pods -n kube-system: exit status 1 (90.764902ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6b52m, age: 13m19.453900312s

** /stderr **
I1007 10:48:21.456454  896726 retry.go:31] will retry after 57.559979516s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-952725 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-952725 top pods -n kube-system: exit status 1 (88.243439ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6b52m, age: 14m17.100699569s

** /stderr **
I1007 10:49:19.105237  896726 retry.go:31] will retry after 1m19.997821324s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-952725 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-952725 top pods -n kube-system: exit status 1 (95.818138ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6b52m, age: 15m37.19917001s

** /stderr **
I1007 10:50:39.202177  896726 retry.go:31] will retry after 1m2.434244434s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-952725 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-952725 top pods -n kube-system: exit status 1 (94.909725ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6b52m, age: 16m39.727139506s

** /stderr **
I1007 10:51:41.731715  896726 retry.go:31] will retry after 35.575903626s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-952725 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-952725 top pods -n kube-system: exit status 1 (88.798867ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-6b52m, age: 17m15.392105312s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-952725
helpers_test.go:235: (dbg) docker inspect addons-952725:

-- stdout --
	[
	    {
	        "Id": "85436f7341a901b222d5828da34ca95e82246f0a4a81c195c0b5073b93bab718",
	        "Created": "2024-10-07T10:34:32.746029296Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 898077,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-07T10:34:32.896447337Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/85436f7341a901b222d5828da34ca95e82246f0a4a81c195c0b5073b93bab718/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/85436f7341a901b222d5828da34ca95e82246f0a4a81c195c0b5073b93bab718/hostname",
	        "HostsPath": "/var/lib/docker/containers/85436f7341a901b222d5828da34ca95e82246f0a4a81c195c0b5073b93bab718/hosts",
	        "LogPath": "/var/lib/docker/containers/85436f7341a901b222d5828da34ca95e82246f0a4a81c195c0b5073b93bab718/85436f7341a901b222d5828da34ca95e82246f0a4a81c195c0b5073b93bab718-json.log",
	        "Name": "/addons-952725",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-952725:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-952725",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3d8a7bcef2ac32f4df0c9013de3e47ad32d0e114b8350d154e406c848b627b66-init/diff:/var/lib/docker/overlay2/679cc8fccbb0902884eb141037cc21fc6e7a2efac609a53e07ea6b92675ef1c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3d8a7bcef2ac32f4df0c9013de3e47ad32d0e114b8350d154e406c848b627b66/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3d8a7bcef2ac32f4df0c9013de3e47ad32d0e114b8350d154e406c848b627b66/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3d8a7bcef2ac32f4df0c9013de3e47ad32d0e114b8350d154e406c848b627b66/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-952725",
	                "Source": "/var/lib/docker/volumes/addons-952725/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-952725",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-952725",
	                "name.minikube.sigs.k8s.io": "addons-952725",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cedd26adcb83d6591c102466cf3325c8c8d6a1f49982b318f48abf887874ea83",
	            "SandboxKey": "/var/run/docker/netns/cedd26adcb83",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33882"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33885"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33883"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33884"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-952725": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null,
	                    "NetworkID": "59038097841d5b49aa82228d19fa2dd63453f94c2643413b72a58a1768d2ccbc",
	                    "EndpointID": "49d7f5aa381f835679c25b2e09022cf01d0aadb939f06563e6481ef28f4c53bd",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-952725",
	                        "85436f7341a9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-952725 -n addons-952725
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-952725 logs -n 25: (1.351105473s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-457065 | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC |                     |
	|         | download-docker-457065                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-457065                                                                   | download-docker-457065 | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC | 07 Oct 24 10:34 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-858793   | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC |                     |
	|         | binary-mirror-858793                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33319                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-858793                                                                     | binary-mirror-858793   | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC | 07 Oct 24 10:34 UTC |
	| addons  | enable dashboard -p                                                                         | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC |                     |
	|         | addons-952725                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC |                     |
	|         | addons-952725                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-952725 --wait=true                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC | 07 Oct 24 10:37 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-952725 addons disable                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:37 UTC | 07 Oct 24 10:37 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-952725 addons disable                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:45 UTC | 07 Oct 24 10:45 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-952725 addons disable                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:45 UTC | 07 Oct 24 10:45 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ip      | addons-952725 ip                                                                            | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:45 UTC | 07 Oct 24 10:45 UTC |
	| addons  | addons-952725 addons disable                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:45 UTC | 07 Oct 24 10:45 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:46 UTC | 07 Oct 24 10:46 UTC |
	|         | -p addons-952725                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-952725 ssh cat                                                                       | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:46 UTC | 07 Oct 24 10:46 UTC |
	|         | /opt/local-path-provisioner/pvc-3e4c90e6-24f8-4bf4-8b84-84508c280cb4_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-952725 addons disable                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:46 UTC | 07 Oct 24 10:46 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-952725 addons                                                                        | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:46 UTC | 07 Oct 24 10:46 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:46 UTC | 07 Oct 24 10:46 UTC |
	|         | -p addons-952725                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-952725 addons disable                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:46 UTC | 07 Oct 24 10:46 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-952725 addons                                                                        | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:46 UTC | 07 Oct 24 10:46 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-952725 addons                                                                        | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:46 UTC | 07 Oct 24 10:47 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-952725 addons                                                                        | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:47 UTC | 07 Oct 24 10:47 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-952725 ssh curl -s                                                                   | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:47 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-952725 ip                                                                            | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:49 UTC | 07 Oct 24 10:49 UTC |
	| addons  | addons-952725 addons disable                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:49 UTC | 07 Oct 24 10:49 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-952725 addons disable                                                                | addons-952725          | jenkins | v1.34.0 | 07 Oct 24 10:49 UTC | 07 Oct 24 10:49 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 10:34:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 10:34:25.854356  897600 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:34:25.854565  897600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:34:25.854592  897600 out.go:358] Setting ErrFile to fd 2...
	I1007 10:34:25.854613  897600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:34:25.854877  897600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-891319/.minikube/bin
	I1007 10:34:25.855346  897600 out.go:352] Setting JSON to false
	I1007 10:34:25.856364  897600 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22610,"bootTime":1728274656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1007 10:34:25.856454  897600 start.go:139] virtualization:  
	I1007 10:34:25.858909  897600 out.go:177] * [addons-952725] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 10:34:25.860971  897600 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 10:34:25.861095  897600 notify.go:220] Checking for updates...
	I1007 10:34:25.864837  897600 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:34:25.866677  897600 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-891319/kubeconfig
	I1007 10:34:25.868665  897600 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-891319/.minikube
	I1007 10:34:25.870395  897600 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 10:34:25.872291  897600 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 10:34:25.874451  897600 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:34:25.897794  897600 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 10:34:25.897932  897600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 10:34:25.971117  897600 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:54 SystemTime:2024-10-07 10:34:25.961029087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 10:34:25.971244  897600 docker.go:318] overlay module found
	I1007 10:34:25.973837  897600 out.go:177] * Using the docker driver based on user configuration
	I1007 10:34:25.976446  897600 start.go:297] selected driver: docker
	I1007 10:34:25.976468  897600 start.go:901] validating driver "docker" against <nil>
	I1007 10:34:25.976483  897600 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 10:34:25.977096  897600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 10:34:26.031963  897600 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:54 SystemTime:2024-10-07 10:34:26.022332122 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 10:34:26.032198  897600 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 10:34:26.032474  897600 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:34:26.034196  897600 out.go:177] * Using Docker driver with root privileges
	I1007 10:34:26.036119  897600 cni.go:84] Creating CNI manager for ""
	I1007 10:34:26.036185  897600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 10:34:26.036202  897600 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 10:34:26.036333  897600 start.go:340] cluster config:
	{Name:addons-952725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-952725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:34:26.038795  897600 out.go:177] * Starting "addons-952725" primary control-plane node in "addons-952725" cluster
	I1007 10:34:26.040746  897600 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 10:34:26.042550  897600 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 10:34:26.044710  897600 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:34:26.044780  897600 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1007 10:34:26.044793  897600 cache.go:56] Caching tarball of preloaded images
	I1007 10:34:26.044800  897600 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 10:34:26.044874  897600 preload.go:172] Found /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1007 10:34:26.044896  897600 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:34:26.045238  897600 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/config.json ...
	I1007 10:34:26.045298  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/config.json: {Name:mkf59b6592a952b92c7d864078e51df503121f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:26.063053  897600 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 10:34:26.063079  897600 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1007 10:34:26.063095  897600 cache.go:194] Successfully downloaded all kic artifacts
	I1007 10:34:26.063119  897600 start.go:360] acquireMachinesLock for addons-952725: {Name:mkcdedd8717c093b45d2d5295616e9bf83c44502 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 10:34:26.063611  897600 start.go:364] duration metric: took 472.086µs to acquireMachinesLock for "addons-952725"
	I1007 10:34:26.063647  897600 start.go:93] Provisioning new machine with config: &{Name:addons-952725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-952725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:34:26.063737  897600 start.go:125] createHost starting for "" (driver="docker")
	I1007 10:34:26.066109  897600 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1007 10:34:26.066406  897600 start.go:159] libmachine.API.Create for "addons-952725" (driver="docker")
	I1007 10:34:26.066456  897600 client.go:168] LocalClient.Create starting
	I1007 10:34:26.066588  897600 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem
	I1007 10:34:26.704392  897600 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem
	I1007 10:34:27.261849  897600 cli_runner.go:164] Run: docker network inspect addons-952725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1007 10:34:27.276764  897600 cli_runner.go:211] docker network inspect addons-952725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1007 10:34:27.276852  897600 network_create.go:284] running [docker network inspect addons-952725] to gather additional debugging logs...
	I1007 10:34:27.276875  897600 cli_runner.go:164] Run: docker network inspect addons-952725
	W1007 10:34:27.290121  897600 cli_runner.go:211] docker network inspect addons-952725 returned with exit code 1
	I1007 10:34:27.290155  897600 network_create.go:287] error running [docker network inspect addons-952725]: docker network inspect addons-952725: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-952725 not found
	I1007 10:34:27.290170  897600 network_create.go:289] output of [docker network inspect addons-952725]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-952725 not found
	
	** /stderr **
	I1007 10:34:27.290269  897600 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 10:34:27.303965  897600 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa98f111c271 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:cf:52:8b:17} reservation:<nil>}
	I1007 10:34:27.304436  897600 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004d78e0}
	I1007 10:34:27.304464  897600 network_create.go:124] attempt to create docker network addons-952725 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1007 10:34:27.304520  897600 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-952725 addons-952725
	I1007 10:34:27.374295  897600 network_create.go:108] docker network addons-952725 192.168.58.0/24 created
	I1007 10:34:27.374327  897600 kic.go:121] calculated static IP "192.168.58.2" for the "addons-952725" container
	I1007 10:34:27.374411  897600 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1007 10:34:27.388673  897600 cli_runner.go:164] Run: docker volume create addons-952725 --label name.minikube.sigs.k8s.io=addons-952725 --label created_by.minikube.sigs.k8s.io=true
	I1007 10:34:27.405988  897600 oci.go:103] Successfully created a docker volume addons-952725
	I1007 10:34:27.406098  897600 cli_runner.go:164] Run: docker run --rm --name addons-952725-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-952725 --entrypoint /usr/bin/test -v addons-952725:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1007 10:34:28.565642  897600 cli_runner.go:217] Completed: docker run --rm --name addons-952725-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-952725 --entrypoint /usr/bin/test -v addons-952725:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib: (1.159496599s)
	I1007 10:34:28.565673  897600 oci.go:107] Successfully prepared a docker volume addons-952725
	I1007 10:34:28.565692  897600 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:34:28.565712  897600 kic.go:194] Starting extracting preloaded images to volume ...
	I1007 10:34:28.565784  897600 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-952725:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
	I1007 10:34:32.670907  897600 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-952725:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (4.105070585s)
	I1007 10:34:32.670939  897600 kic.go:203] duration metric: took 4.105224208s to extract preloaded images to volume ...
	W1007 10:34:32.671084  897600 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1007 10:34:32.671203  897600 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1007 10:34:32.732041  897600 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-952725 --name addons-952725 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-952725 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-952725 --network addons-952725 --ip 192.168.58.2 --volume addons-952725:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1007 10:34:33.105474  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Running}}
	I1007 10:34:33.131196  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:34:33.154921  897600 cli_runner.go:164] Run: docker exec addons-952725 stat /var/lib/dpkg/alternatives/iptables
	I1007 10:34:33.218329  897600 oci.go:144] the created container "addons-952725" has a running status.
	I1007 10:34:33.218361  897600 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa...
	I1007 10:34:33.625637  897600 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1007 10:34:33.656580  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:34:33.679580  897600 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1007 10:34:33.679605  897600 kic_runner.go:114] Args: [docker exec --privileged addons-952725 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1007 10:34:33.781306  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:34:33.805652  897600 machine.go:93] provisionDockerMachine start ...
	I1007 10:34:33.805744  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:33.828114  897600 main.go:141] libmachine: Using SSH client type: native
	I1007 10:34:33.828534  897600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I1007 10:34:33.828556  897600 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 10:34:34.000558  897600 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-952725
	
	I1007 10:34:34.000612  897600 ubuntu.go:169] provisioning hostname "addons-952725"
	I1007 10:34:34.000688  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:34.040547  897600 main.go:141] libmachine: Using SSH client type: native
	I1007 10:34:34.040808  897600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I1007 10:34:34.040824  897600 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-952725 && echo "addons-952725" | sudo tee /etc/hostname
	I1007 10:34:34.202347  897600 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-952725
	
	I1007 10:34:34.202429  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:34.223128  897600 main.go:141] libmachine: Using SSH client type: native
	I1007 10:34:34.223385  897600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I1007 10:34:34.223406  897600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-952725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-952725/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-952725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 10:34:34.364938  897600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 10:34:34.364969  897600 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19761-891319/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-891319/.minikube}
	I1007 10:34:34.364993  897600 ubuntu.go:177] setting up certificates
	I1007 10:34:34.365005  897600 provision.go:84] configureAuth start
	I1007 10:34:34.365070  897600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-952725
	I1007 10:34:34.383190  897600 provision.go:143] copyHostCerts
	I1007 10:34:34.383283  897600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-891319/.minikube/cert.pem (1123 bytes)
	I1007 10:34:34.383408  897600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-891319/.minikube/key.pem (1679 bytes)
	I1007 10:34:34.383471  897600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-891319/.minikube/ca.pem (1078 bytes)
	I1007 10:34:34.383520  897600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca-key.pem org=jenkins.addons-952725 san=[127.0.0.1 192.168.58.2 addons-952725 localhost minikube]
	I1007 10:34:35.214306  897600 provision.go:177] copyRemoteCerts
	I1007 10:34:35.214377  897600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 10:34:35.214434  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:35.231529  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
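	(Note: the 127.0.0.1:33881 this SSH client dials is not a fixed port; it is whatever host port Docker published for the container's 22/tcp when the node container was created above. A minimal Go sketch of the same lookup, shelling out to docker with the inspect template that appears in these log lines; the container name is the one used in this run.)
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// hostSSHPort returns the host port Docker published for the container's
	// 22/tcp, using the same inspect template shown in the log above.
	func hostSSHPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		port, err := hostSSHPort("addons-952725")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// e.g. prints: ssh -p 33881 docker@127.0.0.1
		fmt.Println("ssh -p", port, "docker@127.0.0.1")
	}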
	I1007 10:34:35.329338  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 10:34:35.354596  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 10:34:35.378606  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 10:34:35.403609  897600 provision.go:87] duration metric: took 1.038589196s to configureAuth
	I1007 10:34:35.403634  897600 ubuntu.go:193] setting minikube options for container-runtime
	I1007 10:34:35.403815  897600 config.go:182] Loaded profile config "addons-952725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:34:35.403920  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:35.421209  897600 main.go:141] libmachine: Using SSH client type: native
	I1007 10:34:35.421476  897600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33881 <nil> <nil>}
	I1007 10:34:35.421501  897600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 10:34:35.691031  897600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 10:34:35.691057  897600 machine.go:96] duration metric: took 1.885383979s to provisionDockerMachine
	I1007 10:34:35.691068  897600 client.go:171] duration metric: took 9.624601333s to LocalClient.Create
	I1007 10:34:35.691080  897600 start.go:167] duration metric: took 9.624675999s to libmachine.API.Create "addons-952725"
	I1007 10:34:35.691087  897600 start.go:293] postStartSetup for "addons-952725" (driver="docker")
	I1007 10:34:35.691098  897600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 10:34:35.691166  897600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 10:34:35.691215  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:35.708690  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:34:35.805535  897600 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 10:34:35.808973  897600 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 10:34:35.809010  897600 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 10:34:35.809022  897600 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 10:34:35.809030  897600 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 10:34:35.809041  897600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-891319/.minikube/addons for local assets ...
	I1007 10:34:35.809111  897600 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-891319/.minikube/files for local assets ...
	I1007 10:34:35.809140  897600 start.go:296] duration metric: took 118.047143ms for postStartSetup
	I1007 10:34:35.809468  897600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-952725
	I1007 10:34:35.825970  897600 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/config.json ...
	I1007 10:34:35.826256  897600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 10:34:35.826321  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:35.842154  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:34:35.937705  897600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 10:34:35.941963  897600 start.go:128] duration metric: took 9.878201327s to createHost
	I1007 10:34:35.941989  897600 start.go:83] releasing machines lock for "addons-952725", held for 9.878361146s
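	(The two df probes above are the free-space check on /var; a rough Go equivalent using syscall.Statfs. The path is taken from the commands above, while the percentage formula is an approximation of what df reports, not the exact awk extraction.)
	package main
	
	import (
		"fmt"
		"syscall"
	)
	
	// diskUsage reports capacity and availability for the filesystem holding
	// path, roughly what `df -h /var` and `df -BG /var` extract above.
	func diskUsage(path string) (totalGB, freeGB float64, usedPct int, err error) {
		var fs syscall.Statfs_t
		if err = syscall.Statfs(path, &fs); err != nil {
			return
		}
		total := float64(fs.Blocks) * float64(fs.Bsize)
		free := float64(fs.Bavail) * float64(fs.Bsize)
		totalGB = total / (1 << 30)
		freeGB = free / (1 << 30)
		usedPct = int((total - free) / total * 100)
		return
	}
	
	func main() {
		t, f, p, err := diskUsage("/var")
		if err != nil {
			fmt.Println("statfs failed:", err)
			return
		}
		fmt.Printf("/var: %.1fG total, %.1fG free, ~%d%% used\n", t, f, p)
	}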
	I1007 10:34:35.942059  897600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-952725
	I1007 10:34:35.957804  897600 ssh_runner.go:195] Run: cat /version.json
	I1007 10:34:35.957856  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:35.957871  897600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 10:34:35.957946  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:34:35.976535  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:34:35.978203  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:34:36.201925  897600 ssh_runner.go:195] Run: systemctl --version
	I1007 10:34:36.206364  897600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 10:34:36.351874  897600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 10:34:36.355980  897600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:34:36.376054  897600 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1007 10:34:36.376132  897600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 10:34:36.412681  897600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1007 10:34:36.412704  897600 start.go:495] detecting cgroup driver to use...
	I1007 10:34:36.412767  897600 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 10:34:36.412849  897600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 10:34:36.428959  897600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 10:34:36.441124  897600 docker.go:217] disabling cri-docker service (if available) ...
	I1007 10:34:36.441253  897600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 10:34:36.455596  897600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 10:34:36.471253  897600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 10:34:36.563469  897600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 10:34:36.651979  897600 docker.go:233] disabling docker service ...
	I1007 10:34:36.652098  897600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 10:34:36.674644  897600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 10:34:36.686862  897600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 10:34:36.774305  897600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 10:34:36.871817  897600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 10:34:36.883806  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 10:34:36.900022  897600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 10:34:36.900117  897600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:34:36.909884  897600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 10:34:36.909976  897600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:34:36.920181  897600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:34:36.930152  897600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:34:36.940174  897600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 10:34:36.949415  897600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:34:36.959126  897600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 10:34:36.974614  897600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
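	(Reconstructed net effect of the sed edits above on the /etc/crio/crio.conf.d/02-crio.conf drop-in; this is an inference from the commands, not a capture from the node, and the section headers are assumed from CRI-O's usual layout.)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]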
	I1007 10:34:36.984623  897600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 10:34:36.993100  897600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 10:34:37.001596  897600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:34:37.104150  897600 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 10:34:37.230722  897600 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 10:34:37.230828  897600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 10:34:37.235089  897600 start.go:563] Will wait 60s for crictl version
	I1007 10:34:37.235151  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:34:37.238496  897600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 10:34:37.282863  897600 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1007 10:34:37.283022  897600 ssh_runner.go:195] Run: crio --version
	I1007 10:34:37.322568  897600 ssh_runner.go:195] Run: crio --version
	I1007 10:34:37.364316  897600 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1007 10:34:37.366598  897600 cli_runner.go:164] Run: docker network inspect addons-952725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 10:34:37.385012  897600 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1007 10:34:37.388707  897600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
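	(The bash one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the network gateway. The same idea in Go, as an illustrative sketch: drop any stale line for the name, then append a fresh entry.)
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// ensureHostsEntry mirrors the grep/echo pipeline above: remove any existing
	// line ending in "\t<name>" and append "<ip>\t<name>".
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}
	
	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.58.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}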
	I1007 10:34:37.399314  897600 kubeadm.go:883] updating cluster {Name:addons-952725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-952725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 10:34:37.399439  897600 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:34:37.399500  897600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:34:37.479505  897600 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:34:37.479530  897600 crio.go:433] Images already preloaded, skipping extraction
	I1007 10:34:37.479599  897600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 10:34:37.520982  897600 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 10:34:37.521007  897600 cache_images.go:84] Images are preloaded, skipping loading
	I1007 10:34:37.521016  897600 kubeadm.go:934] updating node { 192.168.58.2 8443 v1.31.1 crio true true} ...
	I1007 10:34:37.521121  897600 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-952725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-952725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 10:34:37.521205  897600 ssh_runner.go:195] Run: crio config
	I1007 10:34:37.578407  897600 cni.go:84] Creating CNI manager for ""
	I1007 10:34:37.578430  897600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 10:34:37.578440  897600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 10:34:37.578463  897600 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-952725 NodeName:addons-952725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 10:34:37.578653  897600 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-952725"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 10:34:37.578731  897600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 10:34:37.587610  897600 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 10:34:37.587699  897600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 10:34:37.596427  897600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1007 10:34:37.618772  897600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 10:34:37.637690  897600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
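	(The kubeadm.yaml just staged bundles four API objects shown above: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A small sketch that splits the multi-document file and lists them, assuming gopkg.in/yaml.v3 is available; the local file path is illustrative.)
	package main
	
	import (
		"fmt"
		"io"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
	
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			err := dec.Decode(&doc)
			if err == io.EOF {
				break
			}
			if err != nil {
				panic(err)
			}
			// e.g. "InitConfiguration (kubeadm.k8s.io/v1beta3)"
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}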
	I1007 10:34:37.655523  897600 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1007 10:34:37.658796  897600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 10:34:37.669252  897600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:34:37.761285  897600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:34:37.774877  897600 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725 for IP: 192.168.58.2
	I1007 10:34:37.774899  897600 certs.go:194] generating shared ca certs ...
	I1007 10:34:37.774916  897600 certs.go:226] acquiring lock for ca certs: {Name:mkd5251b1f18df70f58bf1f19694372431d4d649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:37.775091  897600 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-891319/.minikube/ca.key
	I1007 10:34:38.305463  897600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt ...
	I1007 10:34:38.305497  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt: {Name:mk93666416897e119b1c7611486743cb173bf559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:38.306130  897600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-891319/.minikube/ca.key ...
	I1007 10:34:38.306148  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/ca.key: {Name:mk7769eb9aca2ab5423ab4e9e83760bf16b8dd6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:38.306610  897600 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.key
	I1007 10:34:38.517389  897600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.crt ...
	I1007 10:34:38.517420  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.crt: {Name:mk9593b4c767762d1540e6bc17c44307405443b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:38.517599  897600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.key ...
	I1007 10:34:38.517612  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.key: {Name:mk04f7de9ef03f0008a9de352a11a4dbd27d9456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:38.517696  897600 certs.go:256] generating profile certs ...
	I1007 10:34:38.517755  897600 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.key
	I1007 10:34:38.517784  897600 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt with IP's: []
	I1007 10:34:39.103205  897600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt ...
	I1007 10:34:39.103242  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: {Name:mkd8c660c7eacf949a715ce338f3534540eca313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:39.103494  897600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.key ...
	I1007 10:34:39.103511  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.key: {Name:mkdd59bcfee1302dc0f4731e9ceb73455d6a260d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:39.103611  897600 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.key.602ce869
	I1007 10:34:39.103633  897600 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.crt.602ce869 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.58.2]
	I1007 10:34:39.636086  897600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.crt.602ce869 ...
	I1007 10:34:39.636117  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.crt.602ce869: {Name:mkf2607d021c46799287a2dc8937fc01202b5d2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:39.636720  897600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.key.602ce869 ...
	I1007 10:34:39.636737  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.key.602ce869: {Name:mke98d50a7eee0613c19d1f8c4a8cd3d3966dad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:39.636828  897600 certs.go:381] copying /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.crt.602ce869 -> /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.crt
	I1007 10:34:39.636907  897600 certs.go:385] copying /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.key.602ce869 -> /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.key
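	(The apiserver certificate generated above is signed for the IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.58.2]; 10.96.0.1 is the first usable address of the 10.96.0.0/12 service CIDR, i.e. the ClusterIP the in-cluster kubernetes Service gets, so in-cluster clients can verify the certificate. A one-function Go sketch of that derivation:)
	package main
	
	import (
		"fmt"
		"net/netip"
	)
	
	// firstServiceIP returns the first usable address of a service CIDR, which is
	// the ClusterIP of the "kubernetes" Service (10.96.0.0/12 -> 10.96.0.1).
	func firstServiceIP(cidr string) (netip.Addr, error) {
		p, err := netip.ParsePrefix(cidr)
		if err != nil {
			return netip.Addr{}, err
		}
		return p.Masked().Addr().Next(), nil
	}
	
	func main() {
		ip, err := firstServiceIP("10.96.0.0/12")
		if err != nil {
			panic(err)
		}
		fmt.Println(ip) // 10.96.0.1
	}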
	I1007 10:34:39.636963  897600 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/proxy-client.key
	I1007 10:34:39.636984  897600 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/proxy-client.crt with IP's: []
	I1007 10:34:40.109374  897600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/proxy-client.crt ...
	I1007 10:34:40.109420  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/proxy-client.crt: {Name:mk5638c1eeee4f7a6321f7ae9ae10926fbd937ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:40.110062  897600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/proxy-client.key ...
	I1007 10:34:40.110086  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/proxy-client.key: {Name:mk487f6e978793460c0139297d52f64610384699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:40.110665  897600 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 10:34:40.110717  897600 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem (1078 bytes)
	I1007 10:34:40.110745  897600 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem (1123 bytes)
	I1007 10:34:40.110777  897600 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem (1679 bytes)
	I1007 10:34:40.111501  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 10:34:40.142516  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 10:34:40.171141  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 10:34:40.198289  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 10:34:40.227477  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 10:34:40.253377  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 10:34:40.278121  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 10:34:40.302122  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 10:34:40.325789  897600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 10:34:40.349729  897600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 10:34:40.367170  897600 ssh_runner.go:195] Run: openssl version
	I1007 10:34:40.372438  897600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 10:34:40.381737  897600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:34:40.385164  897600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:34:40.385234  897600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 10:34:40.392059  897600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 10:34:40.401641  897600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 10:34:40.405030  897600 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 10:34:40.405079  897600 kubeadm.go:392] StartCluster: {Name:addons-952725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-952725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:34:40.405167  897600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 10:34:40.405233  897600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 10:34:40.445507  897600 cri.go:89] found id: ""
	I1007 10:34:40.445581  897600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 10:34:40.454344  897600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 10:34:40.463365  897600 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1007 10:34:40.463430  897600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 10:34:40.472286  897600 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 10:34:40.472309  897600 kubeadm.go:157] found existing configuration files:
	
	I1007 10:34:40.472389  897600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 10:34:40.481548  897600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 10:34:40.481615  897600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 10:34:40.490460  897600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 10:34:40.499424  897600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 10:34:40.499491  897600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 10:34:40.508366  897600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 10:34:40.516981  897600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 10:34:40.517046  897600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 10:34:40.525579  897600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 10:34:40.534285  897600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 10:34:40.534354  897600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 10:34:40.542734  897600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1007 10:34:40.583862  897600 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 10:34:40.584366  897600 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 10:34:40.605024  897600 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1007 10:34:40.605098  897600 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1007 10:34:40.605137  897600 kubeadm.go:310] OS: Linux
	I1007 10:34:40.605185  897600 kubeadm.go:310] CGROUPS_CPU: enabled
	I1007 10:34:40.605237  897600 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1007 10:34:40.605289  897600 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1007 10:34:40.605344  897600 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1007 10:34:40.605394  897600 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1007 10:34:40.605448  897600 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1007 10:34:40.605496  897600 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1007 10:34:40.605547  897600 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1007 10:34:40.605596  897600 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1007 10:34:40.667637  897600 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 10:34:40.667753  897600 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 10:34:40.667849  897600 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 10:34:40.674746  897600 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 10:34:40.677687  897600 out.go:235]   - Generating certificates and keys ...
	I1007 10:34:40.677797  897600 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 10:34:40.677867  897600 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 10:34:41.340980  897600 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 10:34:41.964928  897600 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 10:34:42.554719  897600 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 10:34:43.086933  897600 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 10:34:43.634446  897600 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 10:34:43.634818  897600 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-952725 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1007 10:34:44.317873  897600 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 10:34:44.318200  897600 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-952725 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1007 10:34:44.638496  897600 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 10:34:45.692947  897600 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 10:34:46.401030  897600 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 10:34:46.401264  897600 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 10:34:46.952861  897600 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 10:34:47.595990  897600 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 10:34:47.853874  897600 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 10:34:48.592020  897600 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 10:34:48.931891  897600 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 10:34:48.932734  897600 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 10:34:48.936084  897600 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 10:34:48.938193  897600 out.go:235]   - Booting up control plane ...
	I1007 10:34:48.938296  897600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 10:34:48.938372  897600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 10:34:48.939168  897600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 10:34:48.954182  897600 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 10:34:48.962690  897600 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 10:34:48.962745  897600 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 10:34:49.074144  897600 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 10:34:49.074263  897600 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 10:34:50.075334  897600 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001299143s
	I1007 10:34:50.075427  897600 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 10:34:56.577729  897600 kubeadm.go:310] [api-check] The API server is healthy after 6.50235007s
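	(Both waits above are plain healthz polls; the kubelet one targets the HTTP endpoint at 127.0.0.1:10248 shown earlier, while the API-server check additionally needs TLS client credentials, which this sketch omits. A generic Go sketch of the polling pattern; URL and timeout are illustrative.)
	package main
	
	import (
		"fmt"
		"net/http"
		"time"
	)
	
	// waitHealthy polls an HTTP healthz endpoint until it answers 200 OK or the
	// timeout expires, sleeping briefly between attempts.
	func waitHealthy(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy within %s", url, timeout)
	}
	
	func main() {
		if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}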
	I1007 10:34:56.598557  897600 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 10:34:56.611630  897600 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 10:34:56.637098  897600 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 10:34:56.637305  897600 kubeadm.go:310] [mark-control-plane] Marking the node addons-952725 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 10:34:56.648447  897600 kubeadm.go:310] [bootstrap-token] Using token: cz42ba.qa48rmy52comk8ew
	I1007 10:34:56.650159  897600 out.go:235]   - Configuring RBAC rules ...
	I1007 10:34:56.650276  897600 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 10:34:56.655665  897600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 10:34:56.663923  897600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 10:34:56.667814  897600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 10:34:56.671515  897600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 10:34:56.676762  897600 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 10:34:56.985635  897600 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 10:34:57.425085  897600 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 10:34:57.986670  897600 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 10:34:57.987585  897600 kubeadm.go:310] 
	I1007 10:34:57.987661  897600 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 10:34:57.987667  897600 kubeadm.go:310] 
	I1007 10:34:57.987743  897600 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 10:34:57.987747  897600 kubeadm.go:310] 
	I1007 10:34:57.987772  897600 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 10:34:57.987830  897600 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 10:34:57.987880  897600 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 10:34:57.987885  897600 kubeadm.go:310] 
	I1007 10:34:57.987937  897600 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 10:34:57.987942  897600 kubeadm.go:310] 
	I1007 10:34:57.987989  897600 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 10:34:57.988001  897600 kubeadm.go:310] 
	I1007 10:34:57.988053  897600 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 10:34:57.988126  897600 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 10:34:57.988193  897600 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 10:34:57.988198  897600 kubeadm.go:310] 
	I1007 10:34:57.988299  897600 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 10:34:57.988375  897600 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 10:34:57.988380  897600 kubeadm.go:310] 
	I1007 10:34:57.988462  897600 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cz42ba.qa48rmy52comk8ew \
	I1007 10:34:57.988562  897600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e053423b4b9af82cd91e46de0bbe14eaf8715f10cf4af6e7a1673303d5155913 \
	I1007 10:34:57.988583  897600 kubeadm.go:310] 	--control-plane 
	I1007 10:34:57.988588  897600 kubeadm.go:310] 
	I1007 10:34:57.988882  897600 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 10:34:57.988892  897600 kubeadm.go:310] 
	I1007 10:34:57.988973  897600 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cz42ba.qa48rmy52comk8ew \
	I1007 10:34:57.989078  897600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e053423b4b9af82cd91e46de0bbe14eaf8715f10cf4af6e7a1673303d5155913 
	I1007 10:34:57.991771  897600 kubeadm.go:310] W1007 10:34:40.580006    1179 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 10:34:57.992071  897600 kubeadm.go:310] W1007 10:34:40.581256    1179 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 10:34:57.992305  897600 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1007 10:34:57.992417  897600 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
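	(The --discovery-token-ca-cert-hash in the join commands above follows kubeadm's scheme: the SHA-256 of the CA certificate's Subject Public Key Info. A Go sketch that recomputes it from ca.crt; the file path is illustrative.)
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	// caCertHash reproduces the discovery-token-ca-cert-hash format:
	// "sha256:" + hex(SHA-256 of the CA cert's Subject Public Key Info).
	func caCertHash(caPEM []byte) (string, error) {
		block, _ := pem.Decode(caPEM)
		if block == nil {
			return "", fmt.Errorf("no PEM block in CA file")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return "", err
		}
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		return "sha256:" + hex.EncodeToString(sum[:]), nil
	}
	
	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		h, err := caCertHash(pemBytes)
		if err != nil {
			panic(err)
		}
		fmt.Println(h)
	}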
	I1007 10:34:57.992442  897600 cni.go:84] Creating CNI manager for ""
	I1007 10:34:57.992453  897600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 10:34:57.994737  897600 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 10:34:57.996369  897600 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 10:34:58.001033  897600 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 10:34:58.001053  897600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 10:34:58.024975  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1007 10:34:58.292211  897600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 10:34:58.292417  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:34:58.292527  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-952725 minikube.k8s.io/updated_at=2024_10_07T10_34_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f minikube.k8s.io/name=addons-952725 minikube.k8s.io/primary=true
	I1007 10:34:58.414768  897600 ops.go:34] apiserver oom_adj: -16
	I1007 10:34:58.414985  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:34:58.915580  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:34:59.415833  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:34:59.915562  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:35:00.417978  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:35:00.915364  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:35:01.415119  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:35:01.915393  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:35:02.415839  897600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 10:35:02.524408  897600 kubeadm.go:1113] duration metric: took 4.232056186s to wait for elevateKubeSystemPrivileges
	I1007 10:35:02.524441  897600 kubeadm.go:394] duration metric: took 22.119365147s to StartCluster
	I1007 10:35:02.524459  897600 settings.go:142] acquiring lock: {Name:mka20a3e6b00d8e089bb672b1d6ff1f77b6f764a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:35:02.524583  897600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19761-891319/kubeconfig
	I1007 10:35:02.524946  897600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/kubeconfig: {Name:mk44557a7348260d019750a5a9dae3060b2fe543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:35:02.525609  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 10:35:02.525644  897600 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 10:35:02.525867  897600 config.go:182] Loaded profile config "addons-952725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:35:02.525915  897600 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1007 10:35:02.525990  897600 addons.go:69] Setting yakd=true in profile "addons-952725"
	I1007 10:35:02.526004  897600 addons.go:234] Setting addon yakd=true in "addons-952725"
	I1007 10:35:02.526030  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.526474  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.526962  897600 addons.go:69] Setting inspektor-gadget=true in profile "addons-952725"
	I1007 10:35:02.526986  897600 addons.go:234] Setting addon inspektor-gadget=true in "addons-952725"
	I1007 10:35:02.527011  897600 addons.go:69] Setting metrics-server=true in profile "addons-952725"
	I1007 10:35:02.527034  897600 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-952725"
	I1007 10:35:02.527062  897600 addons.go:234] Setting addon metrics-server=true in "addons-952725"
	I1007 10:35:02.527080  897600 addons.go:69] Setting gcp-auth=true in profile "addons-952725"
	I1007 10:35:02.527104  897600 mustload.go:65] Loading cluster: addons-952725
	I1007 10:35:02.527141  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.527266  897600 config.go:182] Loaded profile config "addons-952725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:35:02.527496  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.527651  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.528021  897600 addons.go:69] Setting ingress=true in profile "addons-952725"
	I1007 10:35:02.528040  897600 addons.go:234] Setting addon ingress=true in "addons-952725"
	I1007 10:35:02.528082  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.528523  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.533286  897600 addons.go:69] Setting ingress-dns=true in profile "addons-952725"
	I1007 10:35:02.533324  897600 addons.go:234] Setting addon ingress-dns=true in "addons-952725"
	I1007 10:35:02.533368  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.533844  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.540357  897600 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-952725"
	I1007 10:35:02.540424  897600 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-952725"
	I1007 10:35:02.540482  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.540986  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.548226  897600 out.go:177] * Verifying Kubernetes components...
	I1007 10:35:02.551743  897600 addons.go:69] Setting registry=true in profile "addons-952725"
	I1007 10:35:02.551780  897600 addons.go:234] Setting addon registry=true in "addons-952725"
	I1007 10:35:02.551814  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.552319  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.564160  897600 addons.go:69] Setting storage-provisioner=true in profile "addons-952725"
	I1007 10:35:02.564191  897600 addons.go:234] Setting addon storage-provisioner=true in "addons-952725"
	I1007 10:35:02.564239  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.564757  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.571827  897600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 10:35:02.594410  897600 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-952725"
	I1007 10:35:02.594480  897600 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-952725"
	I1007 10:35:02.594808  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.595099  897600 addons.go:69] Setting volumesnapshots=true in profile "addons-952725"
	I1007 10:35:02.595144  897600 addons.go:234] Setting addon volumesnapshots=true in "addons-952725"
	I1007 10:35:02.595187  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.595589  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.527018  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.615478  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.617139  897600 addons.go:69] Setting volcano=true in profile "addons-952725"
	I1007 10:35:02.617203  897600 addons.go:234] Setting addon volcano=true in "addons-952725"
	I1007 10:35:02.617266  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.617793  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.527026  897600 addons.go:69] Setting cloud-spanner=true in profile "addons-952725"
	I1007 10:35:02.635803  897600 addons.go:234] Setting addon cloud-spanner=true in "addons-952725"
	I1007 10:35:02.635880  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.527073  897600 addons.go:69] Setting default-storageclass=true in profile "addons-952725"
	I1007 10:35:02.641706  897600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-952725"
	I1007 10:35:02.642123  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.527066  897600 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-952725"
	I1007 10:35:02.670115  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.670602  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.680169  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.682894  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.711793  897600 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1007 10:35:02.713578  897600 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 10:35:02.713600  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1007 10:35:02.713663  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.764432  897600 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 10:35:02.764623  897600 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1007 10:35:02.769694  897600 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1007 10:35:02.771988  897600 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 10:35:02.773899  897600 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 10:35:02.774068  897600 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 10:35:02.774082  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 10:35:02.774144  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.800137  897600 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1007 10:35:02.805518  897600 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 10:35:02.805543  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1007 10:35:02.805611  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.806114  897600 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 10:35:02.806148  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1007 10:35:02.806204  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.839745  897600 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1007 10:35:02.839943  897600 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1007 10:35:02.841458  897600 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 10:35:02.841487  897600 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 10:35:02.841554  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.848054  897600 out.go:177]   - Using image docker.io/registry:2.8.3
	I1007 10:35:02.851778  897600 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1007 10:35:02.851878  897600 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1007 10:35:02.851952  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.866764  897600 addons.go:234] Setting addon default-storageclass=true in "addons-952725"
	I1007 10:35:02.866820  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.871123  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.875619  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1007 10:35:02.878353  897600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1007 10:35:02.878378  897600 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1007 10:35:02.878448  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.904567  897600 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-952725"
	I1007 10:35:02.904674  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:02.905173  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:02.911002  897600 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1007 10:35:02.911025  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1007 10:35:02.911097  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	W1007 10:35:02.934821  897600 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1007 10:35:02.939169  897600 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1007 10:35:02.941107  897600 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1007 10:35:02.941133  897600 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1007 10:35:02.941210  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.942118  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:02.965910  897600 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1007 10:35:02.968956  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1007 10:35:02.969148  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:02.969938  897600 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1007 10:35:02.969951  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1007 10:35:02.970007  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:02.975800  897600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 10:35:02.976259  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 10:35:02.984888  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:02.988490  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:02.989042  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1007 10:35:02.993849  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1007 10:35:02.996863  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1007 10:35:03.002757  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1007 10:35:03.005336  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1007 10:35:03.012461  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1007 10:35:03.014735  897600 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1007 10:35:03.019463  897600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1007 10:35:03.019508  897600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1007 10:35:03.019590  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:03.072600  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:03.075320  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:03.083799  897600 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1007 10:35:03.089314  897600 out.go:177]   - Using image docker.io/busybox:stable
	I1007 10:35:03.094587  897600 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 10:35:03.094615  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1007 10:35:03.094685  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:03.115359  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:03.133281  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:03.144073  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:03.144933  897600 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 10:35:03.144947  897600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 10:35:03.145007  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:03.145338  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:03.170693  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:03.173704  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:03.201878  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	W1007 10:35:03.202860  897600 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1007 10:35:03.202887  897600 retry.go:31] will retry after 220.911747ms: ssh: handshake failed: EOF
	I1007 10:35:03.376353  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 10:35:03.483504  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 10:35:03.489681  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 10:35:03.516936  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 10:35:03.537445  897600 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1007 10:35:03.537516  897600 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1007 10:35:03.586679  897600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 10:35:03.586739  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1007 10:35:03.592022  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 10:35:03.609876  897600 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1007 10:35:03.609947  897600 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1007 10:35:03.673885  897600 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1007 10:35:03.673955  897600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1007 10:35:03.686503  897600 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1007 10:35:03.686580  897600 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1007 10:35:03.693012  897600 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1007 10:35:03.693082  897600 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1007 10:35:03.700417  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1007 10:35:03.727830  897600 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1007 10:35:03.727902  897600 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1007 10:35:03.799625  897600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 10:35:03.799698  897600 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 10:35:03.810879  897600 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1007 10:35:03.810948  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1007 10:35:03.825451  897600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1007 10:35:03.825526  897600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1007 10:35:03.833428  897600 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1007 10:35:03.833502  897600 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1007 10:35:03.889881  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 10:35:03.895171  897600 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1007 10:35:03.895281  897600 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1007 10:35:03.965772  897600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1007 10:35:03.965850  897600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1007 10:35:03.967208  897600 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1007 10:35:03.967274  897600 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1007 10:35:03.970676  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1007 10:35:04.011302  897600 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 10:35:04.011387  897600 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 10:35:04.026501  897600 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1007 10:35:04.026578  897600 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1007 10:35:04.057745  897600 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1007 10:35:04.057826  897600 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1007 10:35:04.109056  897600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1007 10:35:04.109132  897600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1007 10:35:04.155197  897600 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1007 10:35:04.155271  897600 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1007 10:35:04.174135  897600 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1007 10:35:04.174210  897600 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1007 10:35:04.201660  897600 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1007 10:35:04.201735  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1007 10:35:04.214759  897600 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1007 10:35:04.214835  897600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1007 10:35:04.245445  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 10:35:04.276108  897600 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 10:35:04.276180  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1007 10:35:04.279501  897600 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1007 10:35:04.279571  897600 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1007 10:35:04.336538  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1007 10:35:04.339381  897600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1007 10:35:04.339451  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1007 10:35:04.355299  897600 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1007 10:35:04.355377  897600 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1007 10:35:04.382326  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 10:35:04.412722  897600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1007 10:35:04.412799  897600 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1007 10:35:04.448799  897600 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1007 10:35:04.448869  897600 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1007 10:35:04.493273  897600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1007 10:35:04.493347  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1007 10:35:04.646126  897600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1007 10:35:04.646194  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1007 10:35:04.655675  897600 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 10:35:04.655735  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1007 10:35:04.815091  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 10:35:04.882845  897600 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 10:35:04.882879  897600 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1007 10:35:05.027386  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 10:35:05.939120  897600 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.962835449s)
	I1007 10:35:05.939195  897600 start.go:971] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
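The completed pipeline above injects a hosts block mapping host.minikube.internal to 192.168.58.1 into the coredns ConfigMap. A quick spot-check of the result, assuming access to the addons-952725 kubeconfig context, could be:

    kubectl --context addons-952725 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'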
	I1007 10:35:05.939368  897600 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.963549002s)
	I1007 10:35:05.940701  897600 node_ready.go:35] waiting up to 6m0s for node "addons-952725" to be "Ready" ...
	I1007 10:35:06.802060  897600 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-952725" context rescaled to 1 replicas
	I1007 10:35:07.957497  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
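The node_ready and kapi entries that follow are minikube's internal polling (up to 6m for the node, plus per-addon pod label selectors). A rough hand-run equivalent with plain kubectl, shown only as a sketch and not what the test executes, would be:

    kubectl --context addons-952725 wait --for=condition=Ready node/addons-952725 --timeout=6m
    kubectl --context addons-952725 -n ingress-nginx wait --for=condition=Ready pod \
      -l app.kubernetes.io/name=ingress-nginx --timeout=6m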
	I1007 10:35:08.746978  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.369542883s)
	I1007 10:35:08.747015  897600 addons.go:475] Verifying addon ingress=true in "addons-952725"
	I1007 10:35:08.747405  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.263814258s)
	I1007 10:35:08.747586  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.257834912s)
	I1007 10:35:08.747639  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.230645666s)
	I1007 10:35:08.747715  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.155608157s)
	I1007 10:35:08.747768  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.04728501s)
	I1007 10:35:08.747828  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.85788165s)
	I1007 10:35:08.747911  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.77717219s)
	I1007 10:35:08.747925  897600 addons.go:475] Verifying addon registry=true in "addons-952725"
	I1007 10:35:08.748015  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.502498777s)
	I1007 10:35:08.748028  897600 addons.go:475] Verifying addon metrics-server=true in "addons-952725"
	I1007 10:35:08.748071  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.411461291s)
	I1007 10:35:08.749248  897600 out.go:177] * Verifying ingress addon...
	I1007 10:35:08.749323  897600 out.go:177] * Verifying registry addon...
	I1007 10:35:08.750430  897600 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-952725 service yakd-dashboard -n yakd-dashboard
	
	I1007 10:35:08.751315  897600 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1007 10:35:08.751352  897600 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1007 10:35:08.766074  897600 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 10:35:08.766134  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:08.772734  897600 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1007 10:35:08.772755  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1007 10:35:08.778735  897600 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1007 10:35:08.793893  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.411481213s)
	W1007 10:35:08.794230  897600 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 10:35:08.794282  897600 retry.go:31] will retry after 125.505555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 10:35:08.794064  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.978939003s)
	I1007 10:35:08.920327  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
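The apply failure above is the usual CRD ordering race: the VolumeSnapshotClass object is submitted in the same batch as the CRDs that define it, before the API server has registered the new kind, hence "ensure CRDs are installed first"; minikube simply retries, here with kubectl apply --force. Outside of that retry loop, one hedged way to avoid the race is to wait for the CRD to become established before applying the class, for example:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml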
	I1007 10:35:09.182721  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.155233855s)
	I1007 10:35:09.182756  897600 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-952725"
	I1007 10:35:09.184794  897600 out.go:177] * Verifying csi-hostpath-driver addon...
	I1007 10:35:09.187263  897600 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1007 10:35:09.201379  897600 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 10:35:09.201406  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:09.304635  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:09.306552  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:09.691974  897600 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 10:35:09.692055  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:09.792859  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:09.793067  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:10.192620  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:10.256668  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:10.257569  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:10.444924  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:10.692103  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:10.792043  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:10.794090  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:11.191914  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:11.255246  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:11.256911  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:11.696569  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:11.761158  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:11.761884  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:11.961572  897600 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1007 10:35:11.961677  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:11.985927  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:12.153214  897600 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1007 10:35:12.190014  897600 addons.go:234] Setting addon gcp-auth=true in "addons-952725"
	I1007 10:35:12.190079  897600 host.go:66] Checking if "addons-952725" exists ...
	I1007 10:35:12.190558  897600 cli_runner.go:164] Run: docker container inspect addons-952725 --format={{.State.Status}}
	I1007 10:35:12.199361  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:12.208710  897600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.288326081s)
	I1007 10:35:12.220493  897600 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1007 10:35:12.220551  897600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952725
	I1007 10:35:12.242276  897600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33881 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/addons-952725/id_rsa Username:docker}
	I1007 10:35:12.262363  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:12.264013  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:12.362417  897600 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1007 10:35:12.364386  897600 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 10:35:12.366124  897600 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1007 10:35:12.366178  897600 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1007 10:35:12.390675  897600 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1007 10:35:12.390698  897600 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1007 10:35:12.411513  897600 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 10:35:12.411533  897600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1007 10:35:12.432763  897600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 10:35:12.691663  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:12.758252  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:12.759029  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:12.946840  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:13.111159  897600 addons.go:475] Verifying addon gcp-auth=true in "addons-952725"
	I1007 10:35:13.113394  897600 out.go:177] * Verifying gcp-auth addon...
	I1007 10:35:13.116487  897600 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1007 10:35:13.119702  897600 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1007 10:35:13.119723  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
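At this point the gcp-auth namespace, service and webhook manifests have been applied and minikube is polling the pod labelled kubernetes.io/minikube-addons=gcp-auth in the gcp-auth namespace. A hand-run equivalent of that wait (a sketch, assuming the same context) would be:

    kubectl --context addons-952725 -n gcp-auth wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=gcp-auth --timeout=3m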
	I1007 10:35:13.192603  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:13.256809  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:13.257747  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:13.620606  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:13.691487  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:13.755605  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:13.756775  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:14.120100  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:14.190992  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:14.255818  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:14.256883  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:14.620582  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:14.690952  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:14.755471  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:14.756463  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:15.120597  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:15.192926  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:15.256178  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:15.257136  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:15.444976  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:15.619980  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:15.690837  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:15.755244  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:15.755928  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:16.120603  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:16.191512  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:16.255250  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:16.256005  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:16.620336  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:16.690606  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:16.755468  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:16.757117  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:17.119517  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:17.192605  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:17.255937  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:17.256270  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:17.621005  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:17.690927  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:17.755359  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:17.756052  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:17.944383  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:18.119729  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:18.191575  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:18.255118  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:18.255806  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:18.619683  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:18.691159  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:18.755966  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:18.756453  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:19.119968  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:19.191378  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:19.254984  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:19.256050  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:19.620195  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:19.691360  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:19.755009  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:19.755810  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:20.119987  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:20.191151  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:20.254722  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:20.255393  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:20.443826  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:20.620328  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:20.690488  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:20.755250  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:20.755942  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:21.119891  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:21.190982  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:21.255407  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:21.255842  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:21.620175  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:21.690600  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:21.755382  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:21.756836  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:22.119501  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:22.191844  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:22.255316  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:22.256367  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:22.444925  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:22.619718  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:22.690635  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:22.755811  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:22.756561  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:23.119705  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:23.192162  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:23.255519  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:23.255872  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:23.620757  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:23.691758  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:23.755520  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:23.756427  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:24.120513  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:24.192086  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:24.256295  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:24.256686  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:24.620116  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:24.691382  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:24.755043  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:24.755951  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:24.944376  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:25.120456  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:25.191982  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:25.255098  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:25.255804  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:25.619905  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:25.691142  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:25.754831  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:25.755802  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:26.119571  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:26.196506  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:26.255701  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:26.255919  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:26.620804  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:26.691121  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:26.758966  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:26.760279  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:26.944889  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:27.120045  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:27.191052  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:27.255586  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:27.256503  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:27.620151  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:27.691250  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:27.755085  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:27.755974  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:28.120761  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:28.191820  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:28.257187  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:28.258995  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:28.620227  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:28.691090  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:28.755198  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:28.755981  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:29.120457  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:29.191919  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:29.254631  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:29.255456  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:29.444477  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:29.620919  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:29.690895  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:29.755819  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:29.756584  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:30.121124  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:30.190834  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:30.256047  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:30.256366  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:30.620035  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:30.691509  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:30.755191  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:30.756316  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:31.120532  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:31.191673  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:31.255118  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:31.256003  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:31.620057  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:31.691374  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:31.754961  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:31.755839  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:31.944435  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:32.120072  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:32.192527  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:32.255012  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:32.255833  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:32.619735  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:32.691148  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:32.754877  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:32.755735  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:33.119787  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:33.193216  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:33.254954  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:33.256083  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:33.620438  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:33.691119  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:33.755672  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:33.756450  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:33.944655  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:34.120692  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:34.192034  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:34.255673  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:34.256037  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:34.619497  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:34.690310  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:34.755281  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:34.756852  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:35.120378  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:35.191733  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:35.254498  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:35.255387  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:35.619412  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:35.691028  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:35.755504  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:35.756209  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:36.120483  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:36.191192  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:36.255967  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:36.256182  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:36.444124  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:36.620053  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:36.721319  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:36.756145  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:36.756893  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:37.119905  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:37.191784  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:37.254501  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:37.255324  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:37.628685  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:37.693437  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:37.755728  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:37.756994  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:38.120487  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:38.191546  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:38.255676  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:38.256728  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:38.444446  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:38.620386  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:38.690692  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:38.755738  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:38.756669  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:39.119804  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:39.190963  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:39.254542  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:39.255341  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:39.620305  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:39.691312  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:39.755717  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:39.755914  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:40.120417  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:40.191998  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:40.256167  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:40.256688  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:40.620044  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:40.691277  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:40.754201  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:40.755358  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:40.944035  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:41.120077  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:41.192620  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:41.257544  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:41.258405  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:41.619956  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:41.691181  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:41.755424  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:41.756305  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:42.121511  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:42.193451  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:42.256214  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:42.257190  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:42.619714  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:42.691138  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:42.755621  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:42.755666  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:43.120030  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:43.192505  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:43.255509  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:43.255572  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:43.445032  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:43.620043  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:43.690901  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:43.755257  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:43.755821  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:44.120394  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:44.192294  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:44.254643  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:44.256031  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:44.620087  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:44.691543  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:44.755815  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:44.756615  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:45.123135  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:45.197595  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:45.256190  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:45.257001  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:45.445510  897600 node_ready.go:53] node "addons-952725" has status "Ready":"False"
	I1007 10:35:45.619979  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:45.690952  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:45.760639  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:45.803827  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:45.954763  897600 node_ready.go:49] node "addons-952725" has status "Ready":"True"
	I1007 10:35:45.954837  897600 node_ready.go:38] duration metric: took 40.014076963s for node "addons-952725" to be "Ready" ...
	I1007 10:35:45.954862  897600 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 10:35:45.967201  897600 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6b52m" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:46.131084  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:46.267250  897600 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 10:35:46.267324  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:46.310010  897600 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 10:35:46.310163  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:46.310414  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:46.623664  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:46.728968  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:46.808033  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:46.808162  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:46.974217  897600 pod_ready.go:93] pod "coredns-7c65d6cfc9-6b52m" in "kube-system" namespace has status "Ready":"True"
	I1007 10:35:46.974238  897600 pod_ready.go:82] duration metric: took 1.006959941s for pod "coredns-7c65d6cfc9-6b52m" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:46.974261  897600 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-952725" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:46.979966  897600 pod_ready.go:93] pod "etcd-addons-952725" in "kube-system" namespace has status "Ready":"True"
	I1007 10:35:46.980036  897600 pod_ready.go:82] duration metric: took 5.766704ms for pod "etcd-addons-952725" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:46.980073  897600 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-952725" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:46.988435  897600 pod_ready.go:93] pod "kube-apiserver-addons-952725" in "kube-system" namespace has status "Ready":"True"
	I1007 10:35:46.988505  897600 pod_ready.go:82] duration metric: took 8.409562ms for pod "kube-apiserver-addons-952725" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:46.988531  897600 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-952725" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:46.995460  897600 pod_ready.go:93] pod "kube-controller-manager-addons-952725" in "kube-system" namespace has status "Ready":"True"
	I1007 10:35:46.995530  897600 pod_ready.go:82] duration metric: took 6.970321ms for pod "kube-controller-manager-addons-952725" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:46.995558  897600 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9dvhw" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:47.130323  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:47.146870  897600 pod_ready.go:93] pod "kube-proxy-9dvhw" in "kube-system" namespace has status "Ready":"True"
	I1007 10:35:47.146942  897600 pod_ready.go:82] duration metric: took 151.362352ms for pod "kube-proxy-9dvhw" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:47.146972  897600 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-952725" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:47.226598  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:47.255699  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:47.256410  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:47.550393  897600 pod_ready.go:93] pod "kube-scheduler-addons-952725" in "kube-system" namespace has status "Ready":"True"
	I1007 10:35:47.550421  897600 pod_ready.go:82] duration metric: took 403.42634ms for pod "kube-scheduler-addons-952725" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:47.550433  897600 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace to be "Ready" ...
	I1007 10:35:47.620092  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:47.722085  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:47.756130  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:47.757088  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:48.121233  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:48.197219  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:48.255986  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:48.258019  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:48.620846  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:48.723760  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:48.757851  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:48.758871  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:49.125829  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:49.195058  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:49.259063  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:49.260493  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:49.562490  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:35:49.624943  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:49.694868  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:49.758809  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:49.760521  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:50.121387  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:50.195276  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:50.259359  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:50.261650  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:50.623481  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:50.695172  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:50.758130  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:50.759920  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:51.135736  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:51.201723  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:51.258129  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:51.260394  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:51.623263  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:51.695999  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:51.761490  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:51.763003  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:52.056970  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:35:52.121263  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:52.222768  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:52.256069  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:52.256979  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:52.620464  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:52.692213  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:52.756741  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:52.757107  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:53.120617  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:53.193206  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:53.256466  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:53.257114  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:53.621091  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:53.692955  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:53.756172  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:53.757086  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:54.058047  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:35:54.119905  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:54.192789  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:54.257499  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:54.257803  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:54.620491  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:54.693275  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:54.763293  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:54.764528  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:55.121439  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:55.193547  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:55.260130  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:55.261469  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:55.620683  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:55.692986  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:55.756887  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:55.757385  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:56.121169  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:56.193777  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:56.257642  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:56.258400  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:56.557330  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:35:56.622515  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:56.692997  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:56.756053  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:56.756816  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:57.120759  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:57.194163  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:57.255667  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:57.256426  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:57.621459  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:57.697848  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:57.778608  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:57.779360  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:58.120365  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:58.193773  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:58.257015  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:58.257978  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:58.620988  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:58.693248  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:58.756341  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:58.757601  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:59.057328  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:35:59.122386  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:59.194061  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:59.257078  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:59.257639  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:35:59.625474  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:35:59.696349  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:35:59.758227  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:35:59.760817  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:00.138236  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:00.266427  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:00.373549  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:00.393821  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:00.621528  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:00.693551  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:00.757398  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:00.758832  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:01.121497  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:01.223837  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:01.256097  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:01.257046  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:01.557170  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:01.620146  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:01.692315  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:01.756457  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:01.757384  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:02.120530  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:02.192537  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:02.256441  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:02.257205  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:02.620575  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:02.691966  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:02.757661  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:02.759096  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:03.120833  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:03.195054  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:03.257723  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:03.259916  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:03.559281  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:03.630899  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:03.693054  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:03.756939  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:03.758589  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:04.120217  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:04.191809  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:04.258440  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:04.259250  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:04.621066  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:04.692994  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:04.757813  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:04.758793  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:05.123837  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:05.192997  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:05.257190  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:05.258719  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:05.620429  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:05.692091  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:05.756781  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:05.759620  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:06.061233  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:06.125816  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:06.223764  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:06.256890  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:06.257100  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:06.621286  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:06.694035  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:06.755722  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:06.757125  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:07.121276  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:07.222844  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:07.256796  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:07.257516  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:07.621750  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:07.692311  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:07.755698  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:07.758308  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:08.120124  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:08.193283  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:08.256637  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:08.257428  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:08.602063  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:08.626477  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:08.692923  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:08.757057  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:08.759065  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:09.120448  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:09.203017  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:09.257440  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:09.258354  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:09.620946  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:09.691975  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:09.756334  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:09.756623  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:10.120732  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:10.204811  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:10.255966  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:10.256934  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:10.619963  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:10.692649  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:10.755930  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:10.756716  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:11.057897  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:11.120671  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:11.203726  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:11.265477  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:11.266801  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:11.620344  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:11.695704  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:11.756591  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:11.758608  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:12.120970  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:12.193725  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:12.256133  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:12.257249  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:12.620551  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:12.691560  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:12.756792  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:12.757868  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:13.120907  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:13.192492  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:13.256494  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:13.256940  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:13.560170  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:13.622678  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:13.692825  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:13.756668  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:13.757685  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:14.120019  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:14.194422  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:14.294068  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:14.295874  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:14.620758  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:14.692984  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:14.754863  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:14.756559  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:15.121932  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:15.221910  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:15.256819  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:15.257999  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:15.620632  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:15.693528  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:15.756480  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:15.756807  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:16.057207  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:16.123437  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:16.233871  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:16.281986  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:16.282520  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:16.621043  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:16.692619  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:16.755211  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:16.756172  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:17.120299  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:17.192047  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:17.257387  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:17.261722  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:17.621312  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:17.692848  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:17.757740  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:17.759588  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:18.059632  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:18.120437  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:18.193952  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:18.297191  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:18.298108  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:18.620592  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:18.691873  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:18.756605  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:18.758236  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:19.121822  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:19.222786  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:19.257853  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:19.259401  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:19.622863  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:19.692777  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:19.757722  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:19.757855  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:20.059736  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:20.120415  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:20.192636  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:20.258119  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:20.258300  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:20.620425  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:20.692121  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:20.757948  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:20.758510  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:21.121078  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:21.196990  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:21.262762  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:21.263812  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:21.620998  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:21.695405  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:21.759154  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:21.761213  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:22.121238  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:22.193002  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:22.255903  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:22.257985  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:22.557532  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:22.619854  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:22.691825  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:22.755977  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:22.757311  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:23.120045  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:23.192714  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:23.256618  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:23.257867  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:23.621784  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:23.692493  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:23.755754  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:23.756013  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:24.120997  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:24.199341  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:24.261855  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:24.263649  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:24.567589  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:24.620347  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:24.693034  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:24.758553  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:24.759823  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:25.121284  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:25.192482  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:25.258401  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:25.259288  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:25.620881  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:25.693052  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:25.758665  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:25.759881  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:26.122172  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:26.193352  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:26.256373  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:26.257010  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:26.620981  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:26.692424  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:26.760691  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:26.762090  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:27.065897  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:27.120699  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:27.200876  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:27.256494  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:27.258942  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:27.621022  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:27.693005  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:27.756608  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:27.758618  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:28.124027  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:28.227304  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:28.261286  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:28.262057  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:28.622231  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:28.692944  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:28.757021  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:28.759390  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:29.121289  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:29.192304  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:29.258422  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:29.259892  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:29.557547  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:29.620863  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:29.692586  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:29.758755  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:29.759898  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:30.122227  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:30.194325  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:30.268582  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:30.269842  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:30.620438  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:30.692514  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:30.758090  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:30.760515  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:31.120944  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:31.191971  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:31.256391  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:31.257454  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:31.620766  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:31.691829  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:31.756102  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:31.757298  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:32.056766  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:32.120556  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:32.192341  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:32.255792  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:32.258022  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:32.620006  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:32.693405  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:32.767048  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:32.770238  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:33.120954  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:33.193114  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:33.260266  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:33.262668  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:33.621503  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:33.698675  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:33.760149  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:33.761745  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:34.059139  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:34.120283  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:34.192365  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:34.261454  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:34.262491  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:34.620004  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:34.692101  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:34.762763  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:34.763412  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 10:36:35.121079  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:35.192159  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:35.256595  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:35.257051  897600 kapi.go:107] duration metric: took 1m26.505736633s to wait for kubernetes.io/minikube-addons=registry ...
	I1007 10:36:35.621537  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:35.693264  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:35.757410  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:36.061799  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:36.122155  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:36.223909  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:36.256432  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:36.627260  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:36.692268  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:36.756392  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:37.127777  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:37.231344  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:37.256480  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:37.621259  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:37.692200  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:37.757320  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:38.120381  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:38.192366  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:38.256155  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:38.558309  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:38.621085  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:38.692908  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:38.755860  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:39.120908  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:39.192235  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:39.255772  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:39.620405  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:39.692368  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:39.755794  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:40.120388  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:40.193016  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:40.258038  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:40.567757  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:40.621083  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:40.722495  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:40.755723  897600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 10:36:41.120827  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:41.192100  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:41.256769  897600 kapi.go:107] duration metric: took 1m32.505411601s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1007 10:36:41.620061  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:41.692679  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:42.134403  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:42.201481  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:42.624335  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:42.725782  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:43.056878  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:43.120323  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:43.192468  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:43.620202  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:43.693539  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:44.120162  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:44.191589  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:44.627461  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:44.691876  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:45.120394  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:45.125783  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:45.193867  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:45.620955  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:45.692911  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:46.120287  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:46.194226  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:46.620153  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 10:36:46.722476  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:47.120366  897600 kapi.go:107] duration metric: took 1m34.003877783s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1007 10:36:47.122614  897600 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-952725 cluster.
	I1007 10:36:47.124380  897600 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1007 10:36:47.126016  897600 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1007 10:36:47.192217  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:47.556277  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:47.692900  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:48.195047  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:48.693120  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:49.193204  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:49.558005  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:49.693026  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:50.192314  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:50.692782  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:51.192448  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:51.692991  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:52.059957  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:52.195170  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:52.693081  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:53.192988  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:53.692325  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:54.192334  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:54.562255  897600 pod_ready.go:103] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"False"
	I1007 10:36:54.693068  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:55.193485  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:55.691776  897600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 10:36:56.219672  897600 kapi.go:107] duration metric: took 1m47.032405778s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1007 10:36:56.221946  897600 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1007 10:36:56.223744  897600 addons.go:510] duration metric: took 1m53.697824035s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1007 10:36:56.556979  897600 pod_ready.go:93] pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace has status "Ready":"True"
	I1007 10:36:56.557006  897600 pod_ready.go:82] duration metric: took 1m9.006564427s for pod "metrics-server-84c5f94fbc-6vc27" in "kube-system" namespace to be "Ready" ...
	I1007 10:36:56.557018  897600 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fmzt5" in "kube-system" namespace to be "Ready" ...
	I1007 10:36:56.562379  897600 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fmzt5" in "kube-system" namespace has status "Ready":"True"
	I1007 10:36:56.562402  897600 pod_ready.go:82] duration metric: took 5.376408ms for pod "nvidia-device-plugin-daemonset-fmzt5" in "kube-system" namespace to be "Ready" ...
	I1007 10:36:56.562421  897600 pod_ready.go:39] duration metric: took 1m10.607532851s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 10:36:56.562436  897600 api_server.go:52] waiting for apiserver process to appear ...
	I1007 10:36:56.562471  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 10:36:56.562536  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 10:36:56.650122  897600 cri.go:89] found id: "a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a"
	I1007 10:36:56.650146  897600 cri.go:89] found id: ""
	I1007 10:36:56.650155  897600 logs.go:282] 1 containers: [a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a]
	I1007 10:36:56.650212  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:36:56.654573  897600 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 10:36:56.654650  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 10:36:56.698951  897600 cri.go:89] found id: "db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e"
	I1007 10:36:56.698974  897600 cri.go:89] found id: ""
	I1007 10:36:56.698982  897600 logs.go:282] 1 containers: [db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e]
	I1007 10:36:56.699037  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:36:56.702707  897600 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 10:36:56.702791  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 10:36:56.743359  897600 cri.go:89] found id: "dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264"
	I1007 10:36:56.743385  897600 cri.go:89] found id: ""
	I1007 10:36:56.743395  897600 logs.go:282] 1 containers: [dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264]
	I1007 10:36:56.743453  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:36:56.748039  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 10:36:56.748118  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 10:36:56.793981  897600 cri.go:89] found id: "21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e"
	I1007 10:36:56.794003  897600 cri.go:89] found id: ""
	I1007 10:36:56.794011  897600 logs.go:282] 1 containers: [21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e]
	I1007 10:36:56.794071  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:36:56.797745  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 10:36:56.797829  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 10:36:56.836301  897600 cri.go:89] found id: "9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354"
	I1007 10:36:56.836326  897600 cri.go:89] found id: ""
	I1007 10:36:56.836335  897600 logs.go:282] 1 containers: [9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354]
	I1007 10:36:56.836396  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:36:56.839818  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 10:36:56.839893  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 10:36:56.877706  897600 cri.go:89] found id: "0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0"
	I1007 10:36:56.877729  897600 cri.go:89] found id: ""
	I1007 10:36:56.877738  897600 logs.go:282] 1 containers: [0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0]
	I1007 10:36:56.877815  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:36:56.881381  897600 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 10:36:56.881468  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 10:36:56.921067  897600 cri.go:89] found id: "a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273"
	I1007 10:36:56.921089  897600 cri.go:89] found id: ""
	I1007 10:36:56.921098  897600 logs.go:282] 1 containers: [a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273]
	I1007 10:36:56.921154  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:36:56.924603  897600 logs.go:123] Gathering logs for kube-apiserver [a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a] ...
	I1007 10:36:56.924630  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a"
	I1007 10:36:57.006047  897600 logs.go:123] Gathering logs for coredns [dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264] ...
	I1007 10:36:57.006096  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264"
	I1007 10:36:57.057018  897600 logs.go:123] Gathering logs for describe nodes ...
	I1007 10:36:57.057056  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 10:36:57.249707  897600 logs.go:123] Gathering logs for dmesg ...
	I1007 10:36:57.249739  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 10:36:57.268267  897600 logs.go:123] Gathering logs for etcd [db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e] ...
	I1007 10:36:57.268298  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e"
	I1007 10:36:57.328214  897600 logs.go:123] Gathering logs for kube-scheduler [21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e] ...
	I1007 10:36:57.328421  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e"
	I1007 10:36:57.397626  897600 logs.go:123] Gathering logs for kube-proxy [9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354] ...
	I1007 10:36:57.397660  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354"
	I1007 10:36:57.462539  897600 logs.go:123] Gathering logs for kube-controller-manager [0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0] ...
	I1007 10:36:57.462568  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0"
	I1007 10:36:57.541360  897600 logs.go:123] Gathering logs for kindnet [a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273] ...
	I1007 10:36:57.541401  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273"
	I1007 10:36:57.581279  897600 logs.go:123] Gathering logs for CRI-O ...
	I1007 10:36:57.581305  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 10:36:57.680992  897600 logs.go:123] Gathering logs for kubelet ...
	I1007 10:36:57.681032  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 10:36:57.752399  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866281    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-952725" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-952725' and this object
	W1007 10:36:57.752648  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866332    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:36:57.752822  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866470    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-952725' and this object
	W1007 10:36:57.753041  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866491    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:36:57.753212  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.899091    1502 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-952725' and this object
	W1007 10:36:57.753417  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.899139    1502 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	I1007 10:36:57.789307  897600 logs.go:123] Gathering logs for container status ...
	I1007 10:36:57.789337  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 10:36:57.863682  897600 out.go:358] Setting ErrFile to fd 2...
	I1007 10:36:57.863710  897600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 10:36:57.863788  897600 out.go:270] X Problems detected in kubelet:
	W1007 10:36:57.863847  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866332    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:36:57.863864  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866470    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-952725' and this object
	W1007 10:36:57.863891  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866491    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:36:57.863899  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.899091    1502 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-952725' and this object
	W1007 10:36:57.863907  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.899139    1502 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	I1007 10:36:57.863915  897600 out.go:358] Setting ErrFile to fd 2...
	I1007 10:36:57.863924  897600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:37:07.864673  897600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 10:37:07.880027  897600 api_server.go:72] duration metric: took 2m5.354347398s to wait for apiserver process to appear ...
	I1007 10:37:07.880057  897600 api_server.go:88] waiting for apiserver healthz status ...
	I1007 10:37:07.880097  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 10:37:07.880167  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 10:37:07.927054  897600 cri.go:89] found id: "a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a"
	I1007 10:37:07.927079  897600 cri.go:89] found id: ""
	I1007 10:37:07.927089  897600 logs.go:282] 1 containers: [a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a]
	I1007 10:37:07.927147  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:07.930899  897600 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 10:37:07.930979  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 10:37:07.975788  897600 cri.go:89] found id: "db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e"
	I1007 10:37:07.975814  897600 cri.go:89] found id: ""
	I1007 10:37:07.975824  897600 logs.go:282] 1 containers: [db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e]
	I1007 10:37:07.975881  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:07.979490  897600 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 10:37:07.979565  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 10:37:08.022892  897600 cri.go:89] found id: "dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264"
	I1007 10:37:08.022919  897600 cri.go:89] found id: ""
	I1007 10:37:08.022928  897600 logs.go:282] 1 containers: [dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264]
	I1007 10:37:08.022996  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:08.027989  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 10:37:08.028070  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 10:37:08.072565  897600 cri.go:89] found id: "21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e"
	I1007 10:37:08.072590  897600 cri.go:89] found id: ""
	I1007 10:37:08.072600  897600 logs.go:282] 1 containers: [21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e]
	I1007 10:37:08.072666  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:08.076407  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 10:37:08.076484  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 10:37:08.118844  897600 cri.go:89] found id: "9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354"
	I1007 10:37:08.118868  897600 cri.go:89] found id: ""
	I1007 10:37:08.118876  897600 logs.go:282] 1 containers: [9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354]
	I1007 10:37:08.118933  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:08.122839  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 10:37:08.122914  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 10:37:08.162329  897600 cri.go:89] found id: "0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0"
	I1007 10:37:08.162353  897600 cri.go:89] found id: ""
	I1007 10:37:08.162362  897600 logs.go:282] 1 containers: [0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0]
	I1007 10:37:08.162423  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:08.165980  897600 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 10:37:08.166049  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 10:37:08.203439  897600 cri.go:89] found id: "a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273"
	I1007 10:37:08.203460  897600 cri.go:89] found id: ""
	I1007 10:37:08.203469  897600 logs.go:282] 1 containers: [a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273]
	I1007 10:37:08.203528  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:08.207017  897600 logs.go:123] Gathering logs for kubelet ...
	I1007 10:37:08.207043  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 10:37:08.271560  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866281    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-952725" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-952725' and this object
	W1007 10:37:08.271835  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866332    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:37:08.272016  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866470    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-952725' and this object
	W1007 10:37:08.272227  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866491    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:37:08.272448  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.899091    1502 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-952725' and this object
	W1007 10:37:08.272656  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.899139    1502 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	I1007 10:37:08.309193  897600 logs.go:123] Gathering logs for kube-scheduler [21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e] ...
	I1007 10:37:08.309229  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e"
	I1007 10:37:08.363518  897600 logs.go:123] Gathering logs for kube-proxy [9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354] ...
	I1007 10:37:08.363549  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354"
	I1007 10:37:08.406239  897600 logs.go:123] Gathering logs for CRI-O ...
	I1007 10:37:08.406265  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 10:37:08.509108  897600 logs.go:123] Gathering logs for container status ...
	I1007 10:37:08.509186  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 10:37:08.611997  897600 logs.go:123] Gathering logs for dmesg ...
	I1007 10:37:08.612034  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 10:37:08.628856  897600 logs.go:123] Gathering logs for describe nodes ...
	I1007 10:37:08.628887  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 10:37:08.763044  897600 logs.go:123] Gathering logs for kube-apiserver [a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a] ...
	I1007 10:37:08.763078  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a"
	I1007 10:37:08.816725  897600 logs.go:123] Gathering logs for etcd [db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e] ...
	I1007 10:37:08.816803  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e"
	I1007 10:37:08.874013  897600 logs.go:123] Gathering logs for coredns [dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264] ...
	I1007 10:37:08.874048  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264"
	I1007 10:37:08.914165  897600 logs.go:123] Gathering logs for kube-controller-manager [0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0] ...
	I1007 10:37:08.914197  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0"
	I1007 10:37:09.007499  897600 logs.go:123] Gathering logs for kindnet [a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273] ...
	I1007 10:37:09.007549  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273"
	I1007 10:37:09.053760  897600 out.go:358] Setting ErrFile to fd 2...
	I1007 10:37:09.053792  897600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 10:37:09.053847  897600 out.go:270] X Problems detected in kubelet:
	W1007 10:37:09.053861  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866332    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:37:09.053868  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866470    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-952725' and this object
	W1007 10:37:09.053893  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866491    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:37:09.053906  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.899091    1502 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-952725' and this object
	W1007 10:37:09.053912  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.899139    1502 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	I1007 10:37:09.053918  897600 out.go:358] Setting ErrFile to fd 2...
	I1007 10:37:09.053931  897600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:37:19.056068  897600 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 10:37:19.064312  897600 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1007 10:37:19.065402  897600 api_server.go:141] control plane version: v1.31.1
	I1007 10:37:19.065427  897600 api_server.go:131] duration metric: took 11.185361694s to wait for apiserver health ...
	I1007 10:37:19.065448  897600 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 10:37:19.065471  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 10:37:19.065544  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 10:37:19.113004  897600 cri.go:89] found id: "a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a"
	I1007 10:37:19.113025  897600 cri.go:89] found id: ""
	I1007 10:37:19.113033  897600 logs.go:282] 1 containers: [a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a]
	I1007 10:37:19.113088  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:19.116801  897600 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 10:37:19.116880  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 10:37:19.159114  897600 cri.go:89] found id: "db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e"
	I1007 10:37:19.159138  897600 cri.go:89] found id: ""
	I1007 10:37:19.159146  897600 logs.go:282] 1 containers: [db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e]
	I1007 10:37:19.159208  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:19.162554  897600 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 10:37:19.162657  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 10:37:19.200865  897600 cri.go:89] found id: "dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264"
	I1007 10:37:19.200890  897600 cri.go:89] found id: ""
	I1007 10:37:19.200899  897600 logs.go:282] 1 containers: [dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264]
	I1007 10:37:19.200980  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:19.204646  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 10:37:19.204757  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 10:37:19.244869  897600 cri.go:89] found id: "21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e"
	I1007 10:37:19.244943  897600 cri.go:89] found id: ""
	I1007 10:37:19.244958  897600 logs.go:282] 1 containers: [21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e]
	I1007 10:37:19.245032  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:19.248688  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 10:37:19.248788  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 10:37:19.289715  897600 cri.go:89] found id: "9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354"
	I1007 10:37:19.289786  897600 cri.go:89] found id: ""
	I1007 10:37:19.289810  897600 logs.go:282] 1 containers: [9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354]
	I1007 10:37:19.289888  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:19.293553  897600 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 10:37:19.293626  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 10:37:19.332066  897600 cri.go:89] found id: "0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0"
	I1007 10:37:19.332135  897600 cri.go:89] found id: ""
	I1007 10:37:19.332146  897600 logs.go:282] 1 containers: [0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0]
	I1007 10:37:19.332237  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:19.335650  897600 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 10:37:19.335762  897600 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 10:37:19.387104  897600 cri.go:89] found id: "a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273"
	I1007 10:37:19.387127  897600 cri.go:89] found id: ""
	I1007 10:37:19.387135  897600 logs.go:282] 1 containers: [a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273]
	I1007 10:37:19.387204  897600 ssh_runner.go:195] Run: which crictl
	I1007 10:37:19.390737  897600 logs.go:123] Gathering logs for coredns [dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264] ...
	I1007 10:37:19.390798  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264"
	I1007 10:37:19.436352  897600 logs.go:123] Gathering logs for kube-controller-manager [0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0] ...
	I1007 10:37:19.436445  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0"
	I1007 10:37:19.509907  897600 logs.go:123] Gathering logs for container status ...
	I1007 10:37:19.509946  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 10:37:19.566727  897600 logs.go:123] Gathering logs for kube-scheduler [21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e] ...
	I1007 10:37:19.566760  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e"
	I1007 10:37:19.613328  897600 logs.go:123] Gathering logs for kube-proxy [9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354] ...
	I1007 10:37:19.613356  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354"
	I1007 10:37:19.650758  897600 logs.go:123] Gathering logs for kindnet [a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273] ...
	I1007 10:37:19.650786  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273"
	I1007 10:37:19.692504  897600 logs.go:123] Gathering logs for kubelet ...
	I1007 10:37:19.692574  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 10:37:19.766253  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866281    1502 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-952725" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-952725' and this object
	W1007 10:37:19.766497  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866332    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:37:19.766672  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866470    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-952725' and this object
	W1007 10:37:19.766884  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866491    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:37:19.767051  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.899091    1502 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-952725' and this object
	W1007 10:37:19.767255  897600 logs.go:138] Found kubelet problem: Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.899139    1502 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	I1007 10:37:19.804156  897600 logs.go:123] Gathering logs for dmesg ...
	I1007 10:37:19.804186  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 10:37:19.821325  897600 logs.go:123] Gathering logs for describe nodes ...
	I1007 10:37:19.821353  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 10:37:19.956686  897600 logs.go:123] Gathering logs for kube-apiserver [a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a] ...
	I1007 10:37:19.956717  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a"
	I1007 10:37:20.021629  897600 logs.go:123] Gathering logs for etcd [db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e] ...
	I1007 10:37:20.021676  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e"
	I1007 10:37:20.070025  897600 logs.go:123] Gathering logs for CRI-O ...
	I1007 10:37:20.070065  897600 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 10:37:20.171276  897600 out.go:358] Setting ErrFile to fd 2...
	I1007 10:37:20.171311  897600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 10:37:20.171372  897600 out.go:270] X Problems detected in kubelet:
	W1007 10:37:20.171385  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866332    1502 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:37:20.171400  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.866470    1502 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-952725' and this object
	W1007 10:37:20.171408  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.866491    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	W1007 10:37:20.171420  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: W1007 10:35:45.899091    1502 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-952725" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-952725' and this object
	W1007 10:37:20.171427  897600 out.go:270]   Oct 07 10:35:45 addons-952725 kubelet[1502]: E1007 10:35:45.899139    1502 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-952725\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-952725' and this object" logger="UnhandledError"
	I1007 10:37:20.171435  897600 out.go:358] Setting ErrFile to fd 2...
	I1007 10:37:20.171446  897600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:37:30.183789  897600 system_pods.go:59] 18 kube-system pods found
	I1007 10:37:30.183838  897600 system_pods.go:61] "coredns-7c65d6cfc9-6b52m" [ed7fc72d-beca-46bb-b394-215642359254] Running
	I1007 10:37:30.183847  897600 system_pods.go:61] "csi-hostpath-attacher-0" [b45f53e6-4940-4b1c-ae02-70ddf7230f58] Running
	I1007 10:37:30.183853  897600 system_pods.go:61] "csi-hostpath-resizer-0" [d17c3c07-b028-47ca-9ba4-df73aa7b84af] Running
	I1007 10:37:30.183857  897600 system_pods.go:61] "csi-hostpathplugin-rddzj" [9e9d3c3a-bf48-4754-9099-da157ca7adc5] Running
	I1007 10:37:30.183861  897600 system_pods.go:61] "etcd-addons-952725" [e7d6ae0d-33ca-4dc4-adb0-33ccc4d5c466] Running
	I1007 10:37:30.183865  897600 system_pods.go:61] "kindnet-57hzc" [31cd7dce-84b2-4be6-89c3-b529c0945b92] Running
	I1007 10:37:30.183869  897600 system_pods.go:61] "kube-apiserver-addons-952725" [174e4f2a-9fae-42fc-bdbc-7d90072c4d6f] Running
	I1007 10:37:30.183873  897600 system_pods.go:61] "kube-controller-manager-addons-952725" [938c6601-5924-451b-9e0f-418459f50c91] Running
	I1007 10:37:30.183879  897600 system_pods.go:61] "kube-ingress-dns-minikube" [cd331aef-a9f6-4aef-8e5c-4f873434c12f] Running
	I1007 10:37:30.183883  897600 system_pods.go:61] "kube-proxy-9dvhw" [8a3c0df3-9298-4f23-a7ca-9732011063aa] Running
	I1007 10:37:30.183887  897600 system_pods.go:61] "kube-scheduler-addons-952725" [ad1890e4-1d3f-470f-9a9f-e46e135a4547] Running
	I1007 10:37:30.183891  897600 system_pods.go:61] "metrics-server-84c5f94fbc-6vc27" [72e58343-87c1-4934-82b9-e0757b74087f] Running
	I1007 10:37:30.183896  897600 system_pods.go:61] "nvidia-device-plugin-daemonset-fmzt5" [a9637f8d-dccb-461d-97b2-a4f5108a27d6] Running
	I1007 10:37:30.183900  897600 system_pods.go:61] "registry-66c9cd494c-ckskn" [d430f6b1-cd25-4f9f-aa81-282aa63589cf] Running
	I1007 10:37:30.183904  897600 system_pods.go:61] "registry-proxy-lxfj2" [0a453af2-0cb5-4656-8c53-e0132b6c6cfc] Running
	I1007 10:37:30.183909  897600 system_pods.go:61] "snapshot-controller-56fcc65765-6nkvg" [f0a03f47-231c-49a7-995c-fa8932f24be8] Running
	I1007 10:37:30.183914  897600 system_pods.go:61] "snapshot-controller-56fcc65765-c5p5m" [5918748e-2aa0-4ef1-83d2-0de4f14da617] Running
	I1007 10:37:30.183918  897600 system_pods.go:61] "storage-provisioner" [0021a05c-7712-4570-8a15-b7911ca5e125] Running
	I1007 10:37:30.183933  897600 system_pods.go:74] duration metric: took 11.118470081s to wait for pod list to return data ...
	I1007 10:37:30.183942  897600 default_sa.go:34] waiting for default service account to be created ...
	I1007 10:37:30.187277  897600 default_sa.go:45] found service account: "default"
	I1007 10:37:30.187312  897600 default_sa.go:55] duration metric: took 3.3625ms for default service account to be created ...
	I1007 10:37:30.187323  897600 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 10:37:30.198035  897600 system_pods.go:86] 18 kube-system pods found
	I1007 10:37:30.198073  897600 system_pods.go:89] "coredns-7c65d6cfc9-6b52m" [ed7fc72d-beca-46bb-b394-215642359254] Running
	I1007 10:37:30.198084  897600 system_pods.go:89] "csi-hostpath-attacher-0" [b45f53e6-4940-4b1c-ae02-70ddf7230f58] Running
	I1007 10:37:30.198090  897600 system_pods.go:89] "csi-hostpath-resizer-0" [d17c3c07-b028-47ca-9ba4-df73aa7b84af] Running
	I1007 10:37:30.198094  897600 system_pods.go:89] "csi-hostpathplugin-rddzj" [9e9d3c3a-bf48-4754-9099-da157ca7adc5] Running
	I1007 10:37:30.198100  897600 system_pods.go:89] "etcd-addons-952725" [e7d6ae0d-33ca-4dc4-adb0-33ccc4d5c466] Running
	I1007 10:37:30.198107  897600 system_pods.go:89] "kindnet-57hzc" [31cd7dce-84b2-4be6-89c3-b529c0945b92] Running
	I1007 10:37:30.198112  897600 system_pods.go:89] "kube-apiserver-addons-952725" [174e4f2a-9fae-42fc-bdbc-7d90072c4d6f] Running
	I1007 10:37:30.198117  897600 system_pods.go:89] "kube-controller-manager-addons-952725" [938c6601-5924-451b-9e0f-418459f50c91] Running
	I1007 10:37:30.198124  897600 system_pods.go:89] "kube-ingress-dns-minikube" [cd331aef-a9f6-4aef-8e5c-4f873434c12f] Running
	I1007 10:37:30.198129  897600 system_pods.go:89] "kube-proxy-9dvhw" [8a3c0df3-9298-4f23-a7ca-9732011063aa] Running
	I1007 10:37:30.198134  897600 system_pods.go:89] "kube-scheduler-addons-952725" [ad1890e4-1d3f-470f-9a9f-e46e135a4547] Running
	I1007 10:37:30.198147  897600 system_pods.go:89] "metrics-server-84c5f94fbc-6vc27" [72e58343-87c1-4934-82b9-e0757b74087f] Running
	I1007 10:37:30.198152  897600 system_pods.go:89] "nvidia-device-plugin-daemonset-fmzt5" [a9637f8d-dccb-461d-97b2-a4f5108a27d6] Running
	I1007 10:37:30.198156  897600 system_pods.go:89] "registry-66c9cd494c-ckskn" [d430f6b1-cd25-4f9f-aa81-282aa63589cf] Running
	I1007 10:37:30.198163  897600 system_pods.go:89] "registry-proxy-lxfj2" [0a453af2-0cb5-4656-8c53-e0132b6c6cfc] Running
	I1007 10:37:30.198168  897600 system_pods.go:89] "snapshot-controller-56fcc65765-6nkvg" [f0a03f47-231c-49a7-995c-fa8932f24be8] Running
	I1007 10:37:30.198172  897600 system_pods.go:89] "snapshot-controller-56fcc65765-c5p5m" [5918748e-2aa0-4ef1-83d2-0de4f14da617] Running
	I1007 10:37:30.198178  897600 system_pods.go:89] "storage-provisioner" [0021a05c-7712-4570-8a15-b7911ca5e125] Running
	I1007 10:37:30.198187  897600 system_pods.go:126] duration metric: took 10.856782ms to wait for k8s-apps to be running ...
	I1007 10:37:30.198198  897600 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 10:37:30.198258  897600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 10:37:30.213256  897600 system_svc.go:56] duration metric: took 15.047814ms WaitForService to wait for kubelet
	I1007 10:37:30.213288  897600 kubeadm.go:582] duration metric: took 2m27.687613381s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 10:37:30.213310  897600 node_conditions.go:102] verifying NodePressure condition ...
	I1007 10:37:30.217245  897600 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 10:37:30.217284  897600 node_conditions.go:123] node cpu capacity is 2
	I1007 10:37:30.217297  897600 node_conditions.go:105] duration metric: took 3.980603ms to run NodePressure ...
	I1007 10:37:30.217309  897600 start.go:241] waiting for startup goroutines ...
	I1007 10:37:30.217318  897600 start.go:246] waiting for cluster config update ...
	I1007 10:37:30.217334  897600 start.go:255] writing updated cluster config ...
	I1007 10:37:30.217649  897600 ssh_runner.go:195] Run: rm -f paused
	I1007 10:37:30.567611  897600 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 10:37:30.569884  897600 out.go:177] * Done! kubectl is now configured to use "addons-952725" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 07 10:49:42 addons-952725 crio[959]: time="2024-10-07 10:49:42.995602513Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-vdk9h Namespace:ingress-nginx ID:a9664ef1150342d105dbfba907f6ee427b20b8ecf3d3ff18137ffd8ef70470c7 UID:ee44aacf-b6ae-4bdf-b8f8-c29a0fb16637 NetNS:/var/run/netns/5cca7256-495b-46dc-8be9-b3ab5d586aa1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 07 10:49:42 addons-952725 crio[959]: time="2024-10-07 10:49:42.995741556Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-vdk9h from CNI network \"kindnet\" (type=ptp)"
	Oct 07 10:49:43 addons-952725 crio[959]: time="2024-10-07 10:49:43.014077210Z" level=info msg="Stopped pod sandbox: a9664ef1150342d105dbfba907f6ee427b20b8ecf3d3ff18137ffd8ef70470c7" id=9e8558b0-948b-490f-aa96-80c1a8188259 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 10:49:43 addons-952725 crio[959]: time="2024-10-07 10:49:43.182625551Z" level=info msg="Removing container: 71ab075263c7fcb40a0e8571f9bd2f87e5246ab5fc0e41ec0655593768c2ef57" id=fcbd16b1-b393-4f6a-961e-b3205d126401 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 07 10:49:43 addons-952725 crio[959]: time="2024-10-07 10:49:43.197956886Z" level=info msg="Removed container 71ab075263c7fcb40a0e8571f9bd2f87e5246ab5fc0e41ec0655593768c2ef57: ingress-nginx/ingress-nginx-controller-bc57996ff-vdk9h/controller" id=fcbd16b1-b393-4f6a-961e-b3205d126401 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.871744099Z" level=info msg="Removing container: 5e5de2fa02e8e8646f314a34e02f1841ff01ab75c89b8d62fd39460f4ae3255f" id=e65d3b87-50b0-49fb-9093-1d12d40c6567 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.887772323Z" level=info msg="Removed container 5e5de2fa02e8e8646f314a34e02f1841ff01ab75c89b8d62fd39460f4ae3255f: ingress-nginx/ingress-nginx-admission-patch-ppdgq/patch" id=e65d3b87-50b0-49fb-9093-1d12d40c6567 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.889142988Z" level=info msg="Removing container: a10b2cf1ed463b1cc35ff64b49e1566bcc412399b00fc5fdf87b1394b53c1612" id=f725b101-8c91-4620-b289-5f5a4ad6298f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.904218284Z" level=info msg="Removed container a10b2cf1ed463b1cc35ff64b49e1566bcc412399b00fc5fdf87b1394b53c1612: ingress-nginx/ingress-nginx-admission-create-zzdln/create" id=f725b101-8c91-4620-b289-5f5a4ad6298f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.905697904Z" level=info msg="Stopping pod sandbox: a9664ef1150342d105dbfba907f6ee427b20b8ecf3d3ff18137ffd8ef70470c7" id=df034208-e020-4e1f-a84d-a1880d817a51 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.905735041Z" level=info msg="Stopped pod sandbox (already stopped): a9664ef1150342d105dbfba907f6ee427b20b8ecf3d3ff18137ffd8ef70470c7" id=df034208-e020-4e1f-a84d-a1880d817a51 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.906122312Z" level=info msg="Removing pod sandbox: a9664ef1150342d105dbfba907f6ee427b20b8ecf3d3ff18137ffd8ef70470c7" id=bb0ff9ab-9559-483f-b248-451cfa3aa4e1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.916646409Z" level=info msg="Removed pod sandbox: a9664ef1150342d105dbfba907f6ee427b20b8ecf3d3ff18137ffd8ef70470c7" id=bb0ff9ab-9559-483f-b248-451cfa3aa4e1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.917141044Z" level=info msg="Stopping pod sandbox: 93c371384f624626b7f342d034b89ecb2d73c35fd9f42b1faf9e89b0c3b43b73" id=8936597d-ad26-457c-9033-825d41906e53 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.917261453Z" level=info msg="Stopped pod sandbox (already stopped): 93c371384f624626b7f342d034b89ecb2d73c35fd9f42b1faf9e89b0c3b43b73" id=8936597d-ad26-457c-9033-825d41906e53 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.917605738Z" level=info msg="Removing pod sandbox: 93c371384f624626b7f342d034b89ecb2d73c35fd9f42b1faf9e89b0c3b43b73" id=3a1896ce-2ebb-4689-8348-12482826ca8f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.930499091Z" level=info msg="Removed pod sandbox: 93c371384f624626b7f342d034b89ecb2d73c35fd9f42b1faf9e89b0c3b43b73" id=3a1896ce-2ebb-4689-8348-12482826ca8f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.931030903Z" level=info msg="Stopping pod sandbox: 428eed4a474cfb4e8764f7af39a1b2e3fac3a33237bc60e32a45233f3eea24d0" id=92493f83-abd1-479f-bb0a-ae0591187743 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.931067612Z" level=info msg="Stopped pod sandbox (already stopped): 428eed4a474cfb4e8764f7af39a1b2e3fac3a33237bc60e32a45233f3eea24d0" id=92493f83-abd1-479f-bb0a-ae0591187743 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.931412012Z" level=info msg="Removing pod sandbox: 428eed4a474cfb4e8764f7af39a1b2e3fac3a33237bc60e32a45233f3eea24d0" id=2c444d56-a6e3-46e7-ab52-63c5a7e167bc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.939637048Z" level=info msg="Removed pod sandbox: 428eed4a474cfb4e8764f7af39a1b2e3fac3a33237bc60e32a45233f3eea24d0" id=2c444d56-a6e3-46e7-ab52-63c5a7e167bc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.940131863Z" level=info msg="Stopping pod sandbox: 4d058b31939d030aae4a0d792bc96c1dcc9fb401af03839bdf32b5bca7450f5e" id=68b6c5f3-ade0-47c8-a388-42f17e247b8e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.940361482Z" level=info msg="Stopped pod sandbox (already stopped): 4d058b31939d030aae4a0d792bc96c1dcc9fb401af03839bdf32b5bca7450f5e" id=68b6c5f3-ade0-47c8-a388-42f17e247b8e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.940812711Z" level=info msg="Removing pod sandbox: 4d058b31939d030aae4a0d792bc96c1dcc9fb401af03839bdf32b5bca7450f5e" id=560d227a-438c-4e40-afb8-e278cc7bad01 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 07 10:49:57 addons-952725 crio[959]: time="2024-10-07 10:49:57.948914359Z" level=info msg="Removed pod sandbox: 4d058b31939d030aae4a0d792bc96c1dcc9fb401af03839bdf32b5bca7450f5e" id=560d227a-438c-4e40-afb8-e278cc7bad01 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f35a376b61999       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   f923962a6ac46       hello-world-app-55bf9c44b4-m2tq6
	f810b25076c63       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     3 minutes ago       Running             busybox                   0                   0c88881f09ada       busybox
	4956146809a68       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         5 minutes ago       Running             nginx                     0                   c48674f24f4ff       nginx
	774aa5975d8da       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98        16 minutes ago      Running             local-path-provisioner    0                   fe47699131a23       local-path-provisioner-86d989889c-kfkrw
	2a59e268f75cf       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   16 minutes ago      Running             metrics-server            0                   acd50e2015e95       metrics-server-84c5f94fbc-6vc27
	c313b70ac4ef3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        16 minutes ago      Running             storage-provisioner       0                   364a50ef72e82       storage-provisioner
	dd456422afb90       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        16 minutes ago      Running             coredns                   0                   680d1bfd238f1       coredns-7c65d6cfc9-6b52m
	9bbd85a195c8e       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                        17 minutes ago      Running             kube-proxy                0                   97aad492f0b39       kube-proxy-9dvhw
	a841da971afdb       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                        17 minutes ago      Running             kindnet-cni               0                   741d891f245dd       kindnet-57hzc
	a3303a3c982ff       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                        17 minutes ago      Running             kube-apiserver            0                   c008e76944ff3       kube-apiserver-addons-952725
	0345daaeb9c8f       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                        17 minutes ago      Running             kube-controller-manager   0                   d2914c35e07ac       kube-controller-manager-addons-952725
	21c0c933f1c0a       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                        17 minutes ago      Running             kube-scheduler            0                   0ec301299d3ac       kube-scheduler-addons-952725
	db4e405a62283       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        17 minutes ago      Running             etcd                      0                   efc0fac896569       etcd-addons-952725
	
	
	==> coredns [dd456422afb9038bc7ee8a51e6283a5a4758085f9e76e5af7d921e328630e264] <==
	[INFO] 10.244.0.19:32907 - 31320 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000066108s
	[INFO] 10.244.0.19:34195 - 39410 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002535203s
	[INFO] 10.244.0.19:32907 - 19007 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001618623s
	[INFO] 10.244.0.19:32907 - 59512 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002569509s
	[INFO] 10.244.0.19:34195 - 6694 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00186s
	[INFO] 10.244.0.19:32907 - 15088 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00017389s
	[INFO] 10.244.0.19:34195 - 1370 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075339s
	[INFO] 10.244.0.19:55462 - 11214 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000152098s
	[INFO] 10.244.0.19:55462 - 13899 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075536s
	[INFO] 10.244.0.19:55462 - 20725 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000066108s
	[INFO] 10.244.0.19:55462 - 41968 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000063901s
	[INFO] 10.244.0.19:55462 - 64198 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057173s
	[INFO] 10.244.0.19:55462 - 51076 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047179s
	[INFO] 10.244.0.19:43697 - 9144 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000103039s
	[INFO] 10.244.0.19:55462 - 28678 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002022287s
	[INFO] 10.244.0.19:43697 - 15575 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000073632s
	[INFO] 10.244.0.19:43697 - 2024 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000066838s
	[INFO] 10.244.0.19:43697 - 22589 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000244068s
	[INFO] 10.244.0.19:55462 - 32999 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001534045s
	[INFO] 10.244.0.19:43697 - 26822 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000134358s
	[INFO] 10.244.0.19:43697 - 52494 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000083495s
	[INFO] 10.244.0.19:55462 - 13646 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000241631s
	[INFO] 10.244.0.19:43697 - 47616 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001578188s
	[INFO] 10.244.0.19:43697 - 39009 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001682664s
	[INFO] 10.244.0.19:43697 - 22038 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000089797s
	
	
	==> describe nodes <==
	Name:               addons-952725
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-952725
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=addons-952725
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T10_34_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-952725
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:34:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-952725
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 10:52:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 10:50:04 +0000   Mon, 07 Oct 2024 10:34:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 10:50:04 +0000   Mon, 07 Oct 2024 10:34:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 10:50:04 +0000   Mon, 07 Oct 2024 10:34:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 10:50:04 +0000   Mon, 07 Oct 2024 10:35:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    addons-952725
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 667ded5c667f48beaa77c802f4949d98
	  System UUID:                89c65a8b-1435-4af7-a1ff-2983fc44da00
	  Boot ID:                    9a8fefe6-3962-4cb9-809a-2b740ac8992f
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-m2tq6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 coredns-7c65d6cfc9-6b52m                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-addons-952725                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-57hzc                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-addons-952725               250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-addons-952725      200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-9dvhw                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-addons-952725               100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-84c5f94fbc-6vc27            100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         17m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  local-path-storage          local-path-provisioner-86d989889c-kfkrw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 17m   kube-proxy       
	  Normal   Starting                 17m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  17m   kubelet          Node addons-952725 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m   kubelet          Node addons-952725 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m   kubelet          Node addons-952725 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m   node-controller  Node addons-952725 event: Registered Node addons-952725 in Controller
	  Normal   NodeReady                16m   kubelet          Node addons-952725 status is now: NodeReady
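	
	The Ready condition above only turned True about a minute after the pressure conditions cleared. A quick way to re-read the same conditions without a full describe, assuming the same kubeconfig context used throughout this run, is:
	
	  kubectl --context addons-952725 get node addons-952725 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
	
	which prints one Type=Status pair per condition (MemoryPressure, DiskPressure, PIDPressure, Ready).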
	
	
	==> dmesg <==
	
	
	==> etcd [db4e405a6228378a7747e038be9a5f99723ff26ab14544258ee746201d319c9e] <==
	{"level":"info","ts":"2024-10-07T10:34:51.252486Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:addons-952725 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-07T10:34:51.252618Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T10:34:51.252856Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T10:34:51.253012Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T10:34:51.253076Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T10:34:51.253098Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T10:34:51.253696Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T10:34:51.254518Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2024-10-07T10:34:51.255269Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T10:34:51.264925Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-07T10:34:51.266572Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T10:34:51.266603Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-07T10:35:03.535075Z","caller":"traceutil/trace.go:171","msg":"trace[2132271953] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"284.96651ms","start":"2024-10-07T10:35:03.250093Z","end":"2024-10-07T10:35:03.535060Z","steps":["trace[2132271953] 'process raft request'  (duration: 284.86585ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T10:35:03.673545Z","caller":"traceutil/trace.go:171","msg":"trace[1695671777] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"188.156093ms","start":"2024-10-07T10:35:03.485379Z","end":"2024-10-07T10:35:03.673535Z","steps":["trace[1695671777] 'process raft request'  (duration: 140.968257ms)","trace[1695671777] 'compare'  (duration: 47.016137ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T10:35:03.673488Z","caller":"traceutil/trace.go:171","msg":"trace[214699764] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"166.83918ms","start":"2024-10-07T10:35:03.506636Z","end":"2024-10-07T10:35:03.673475Z","steps":["trace[214699764] 'process raft request'  (duration: 166.801207ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T10:35:03.878205Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T10:35:03.506608Z","time spent":"371.140045ms","remote":"127.0.0.1:56670","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3978,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-shtgd\" mod_revision:330 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-shtgd\" value_size:3919 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-shtgd\" > >"}
	{"level":"info","ts":"2024-10-07T10:35:03.931468Z","caller":"traceutil/trace.go:171","msg":"trace[732002138] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"201.577926ms","start":"2024-10-07T10:35:03.729870Z","end":"2024-10-07T10:35:03.931448Z","steps":["trace[732002138] 'process raft request'  (duration: 158.749519ms)","trace[732002138] 'compare'  (duration: 42.586169ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T10:35:04.868891Z","caller":"traceutil/trace.go:171","msg":"trace[1459735235] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"114.751638ms","start":"2024-10-07T10:35:04.754121Z","end":"2024-10-07T10:35:04.868873Z","steps":["trace[1459735235] 'process raft request'  (duration: 114.599114ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T10:35:06.268722Z","caller":"traceutil/trace.go:171","msg":"trace[2118704945] transaction","detail":"{read_only:false; response_revision:358; number_of_response:1; }","duration":"120.02973ms","start":"2024-10-07T10:35:06.148673Z","end":"2024-10-07T10:35:06.268702Z","steps":["trace[2118704945] 'process raft request'  (duration: 115.764183ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T10:44:52.166972Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1473}
	{"level":"info","ts":"2024-10-07T10:44:52.198132Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1473,"took":"30.736316ms","hash":748900064,"current-db-size-bytes":6008832,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3014656,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2024-10-07T10:44:52.198199Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":748900064,"revision":1473,"compact-revision":-1}
	{"level":"info","ts":"2024-10-07T10:49:52.205935Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1887}
	{"level":"info","ts":"2024-10-07T10:49:52.237902Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1887,"took":"31.30561ms","hash":1445391372,"current-db-size-bytes":6008832,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":5009408,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-10-07T10:49:52.237952Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1445391372,"revision":1887,"compact-revision":1473}
	
	
	==> kernel <==
	 10:52:18 up  6:34,  0 users,  load average: 0.20, 0.38, 1.10
	Linux addons-952725 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [a841da971afdb087bad2d9333829157fb296b646796d8b40f1ff566751d25273] <==
	I1007 10:50:15.675174       1 main.go:299] handling current node
	I1007 10:50:25.673783       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:50:25.673820       1 main.go:299] handling current node
	I1007 10:50:35.672477       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:50:35.672601       1 main.go:299] handling current node
	I1007 10:50:45.673731       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:50:45.673780       1 main.go:299] handling current node
	I1007 10:50:55.672830       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:50:55.672961       1 main.go:299] handling current node
	I1007 10:51:05.673221       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:51:05.673254       1 main.go:299] handling current node
	I1007 10:51:15.673863       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:51:15.673902       1 main.go:299] handling current node
	I1007 10:51:25.679598       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:51:25.679637       1 main.go:299] handling current node
	I1007 10:51:35.673038       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:51:35.673167       1 main.go:299] handling current node
	I1007 10:51:45.672236       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:51:45.672392       1 main.go:299] handling current node
	I1007 10:51:55.674950       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:51:55.674986       1 main.go:299] handling current node
	I1007 10:52:05.672266       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:52:05.672304       1 main.go:299] handling current node
	I1007 10:52:15.680328       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 10:52:15.680362       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a3303a3c982ff4f13d1b2f93170548744f9251f242247d0d24a442cd94e03b3a] <==
	 > logger="UnhandledError"
	E1007 10:36:56.227761       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1007 10:36:56.237336       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1007 10:45:58.327959       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1007 10:46:24.302888       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.238.130"}
	I1007 10:46:27.302998       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1007 10:46:55.770138       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1007 10:46:56.666218       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:46:56.674488       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 10:46:56.707534       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:46:56.707690       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 10:46:56.770325       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:46:56.771113       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 10:46:56.846911       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:46:56.846961       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1007 10:46:56.850959       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1007 10:46:56.851001       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1007 10:46:57.848508       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1007 10:46:57.851773       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1007 10:46:57.954494       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1007 10:47:09.626739       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1007 10:47:10.667362       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1007 10:47:15.188377       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1007 10:47:15.491369       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.61.33"}
	I1007 10:49:34.709827       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.233.217"}
	
	
	==> kube-controller-manager [0345daaeb9c8fd8e3f96d12f8756508d11194f14629f360a7aa9a10102bbfbb0] <==
	I1007 10:50:04.426612       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-952725"
	W1007 10:50:21.640951       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:50:21.641007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:50:22.077760       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:50:22.077807       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:50:23.670740       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:50:23.670794       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:50:33.703191       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:50:33.703234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:51:10.144302       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:51:10.144362       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:51:15.641520       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:51:15.641563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:51:18.262714       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:51:18.262764       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:51:22.075229       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:51:22.075272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:51:51.809356       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:51:51.809401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:51:53.644126       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:51:53.644168       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:51:59.895893       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:51:59.896014       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 10:52:14.074835       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 10:52:14.074878       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
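	
	The controller-manager is stuck retrying a metadata watch for a resource the API server no longer serves. Based only on what is visible in these logs, plausible sources are the CRD groups torn down earlier in the run (snapshot.storage.k8s.io, gadget.kinvolk.io) or an unavailable aggregated API such as v1beta1.metrics.k8s.io; that attribution is an assumption, not something the log states. Two hedged checks that narrow it down:
	
	  kubectl --context addons-952725 get apiservices.apiregistration.k8s.io
	  kubectl --context addons-952725 api-resources --verbs=list -o name
	
	An APIService whose AVAILABLE column reads False keeps aggregated discovery incomplete, which is enough to make metadata informers fail in exactly this way.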
	
	
	==> kube-proxy [9bbd85a195c8eed589ffbc42fa92db6b9707afa6b2498d03cf2e2b985d462354] <==
	I1007 10:35:07.333905       1 server_linux.go:66] "Using iptables proxy"
	I1007 10:35:08.233196       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.58.2"]
	E1007 10:35:08.238207       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 10:35:08.292390       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1007 10:35:08.292522       1 server_linux.go:169] "Using iptables Proxier"
	I1007 10:35:08.294294       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 10:35:08.294746       1 server.go:483] "Version info" version="v1.31.1"
	I1007 10:35:08.294934       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 10:35:08.296089       1 config.go:199] "Starting service config controller"
	I1007 10:35:08.296168       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 10:35:08.296236       1 config.go:105] "Starting endpoint slice config controller"
	I1007 10:35:08.296564       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 10:35:08.298228       1 config.go:328] "Starting node config controller"
	I1007 10:35:08.298433       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 10:35:08.400005       1 shared_informer.go:320] Caches are synced for node config
	I1007 10:35:08.400237       1 shared_informer.go:320] Caches are synced for service config
	I1007 10:35:08.400284       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [21c0c933f1c0a7184a0cc09446430d50dab64e1b2d6072dfc18377e7cf4b8e0e] <==
	W1007 10:34:55.590927       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 10:34:55.590994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.591021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 10:34:55.591105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.591114       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 10:34:55.591195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.591165       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1007 10:34:55.591278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 10:34:55.591303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E1007 10:34:55.591289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.591408       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 10:34:55.591425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.591482       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 10:34:55.591498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.591490       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1007 10:34:55.591576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 10:34:55.591594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1007 10:34:55.591574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.591544       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 10:34:55.591671       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.591068       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 10:34:55.591697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 10:34:55.590964       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 10:34:55.591717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1007 10:34:56.784424       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 10:50:27 addons-952725 kubelet[1502]: E1007 10:50:27.715873    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298227715626596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:50:27 addons-952725 kubelet[1502]: E1007 10:50:27.715915    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298227715626596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:50:37 addons-952725 kubelet[1502]: E1007 10:50:37.718530    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298237718291342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:50:37 addons-952725 kubelet[1502]: E1007 10:50:37.718568    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298237718291342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:50:47 addons-952725 kubelet[1502]: E1007 10:50:47.721559    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298247721359486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:50:47 addons-952725 kubelet[1502]: E1007 10:50:47.721603    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298247721359486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:50:57 addons-952725 kubelet[1502]: E1007 10:50:57.724582    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298257724366186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:50:57 addons-952725 kubelet[1502]: E1007 10:50:57.724619    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298257724366186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:01 addons-952725 kubelet[1502]: I1007 10:51:01.358648    1502 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 10:51:07 addons-952725 kubelet[1502]: E1007 10:51:07.727897    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298267726931086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:07 addons-952725 kubelet[1502]: E1007 10:51:07.727939    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298267726931086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:17 addons-952725 kubelet[1502]: E1007 10:51:17.730371    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298277730118268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:17 addons-952725 kubelet[1502]: E1007 10:51:17.730411    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298277730118268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:27 addons-952725 kubelet[1502]: E1007 10:51:27.732805    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298287732585438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:27 addons-952725 kubelet[1502]: E1007 10:51:27.732843    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298287732585438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:37 addons-952725 kubelet[1502]: E1007 10:51:37.735506    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298297735283759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:37 addons-952725 kubelet[1502]: E1007 10:51:37.735546    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298297735283759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:47 addons-952725 kubelet[1502]: E1007 10:51:47.738745    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298307738493147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:47 addons-952725 kubelet[1502]: E1007 10:51:47.738781    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298307738493147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:57 addons-952725 kubelet[1502]: E1007 10:51:57.741739    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298317741503319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:51:57 addons-952725 kubelet[1502]: E1007 10:51:57.741776    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298317741503319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:07 addons-952725 kubelet[1502]: E1007 10:52:07.744894    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298327744562438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:07 addons-952725 kubelet[1502]: E1007 10:52:07.744933    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298327744562438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:17 addons-952725 kubelet[1502]: E1007 10:52:17.748740    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298337748454414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 10:52:17 addons-952725 kubelet[1502]: E1007 10:52:17.748774    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728298337748454414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596721,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
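	
	Two distinct things show up above: the repeated eviction-manager errors (kubelet treats the ImageFsInfo response as incomplete because it carries no container-filesystem stats) and, at 10:51:01, a missing "gcp-auth" pull secret for default/busybox. For the latter, a hedged pair of checks to see whether the secret exists and whether the pod references it:
	
	  kubectl --context addons-952725 -n default get secret gcp-auth
	  kubectl --context addons-952725 -n default get pod busybox -o jsonpath='{.spec.imagePullSecrets}'
	
	If the secret is absent, the kubelet falls back to an anonymous pull, which is consistent with the warning logged here.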
	
	
	==> storage-provisioner [c313b70ac4ef3b6236c0631c0052cfe4654849c5b111986e269fc26359577187] <==
	I1007 10:35:46.887649       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 10:35:46.902053       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 10:35:46.902272       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 10:35:46.912935       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 10:35:46.913198       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-952725_5cbfe9d8-3535-413a-83a5-d5ab752ef72f!
	I1007 10:35:46.914206       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e74caa12-d903-4476-a949-8f0d8e425f00", APIVersion:"v1", ResourceVersion:"879", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-952725_5cbfe9d8-3535-413a-83a5-d5ab752ef72f became leader
	I1007 10:35:47.014065       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-952725_5cbfe9d8-3535-413a-83a5-d5ab752ef72f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-952725 -n addons-952725
helpers_test.go:261: (dbg) Run:  kubectl --context addons-952725 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (340.16s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (128.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-685971 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1007 11:05:44.824718  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:06:12.527459  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-685971 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m3.74809203s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:591: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-685971       NotReady   control-plane   10m     v1.31.1
	ha-685971-m02   Ready      control-plane   9m48s   v1.31.1
	ha-685971-m04   Ready      <none>          7m22s   v1.31.1

                                                
                                                
-- /stdout --
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:599: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
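The go-template above reports one node's Ready condition as Unknown, matching the NotReady entry in the node listing a few lines earlier. An equivalent check with jsonpath (same information, just a different formatter than the test's go-template) would be:

  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'

which prints a name/status pair per node and makes the Unknown entry easy to spot.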
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-685971
helpers_test.go:235: (dbg) docker inspect ha-685971:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8be7d7009bbdc17b7dbeb3d71a28fc053eeec09e04ffd4d2a1c56430f01cda88",
	        "Created": "2024-10-07T10:56:47.417638408Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 963047,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-07T11:05:19.915629601Z",
	            "FinishedAt": "2024-10-07T11:05:19.184985337Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/8be7d7009bbdc17b7dbeb3d71a28fc053eeec09e04ffd4d2a1c56430f01cda88/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8be7d7009bbdc17b7dbeb3d71a28fc053eeec09e04ffd4d2a1c56430f01cda88/hostname",
	        "HostsPath": "/var/lib/docker/containers/8be7d7009bbdc17b7dbeb3d71a28fc053eeec09e04ffd4d2a1c56430f01cda88/hosts",
	        "LogPath": "/var/lib/docker/containers/8be7d7009bbdc17b7dbeb3d71a28fc053eeec09e04ffd4d2a1c56430f01cda88/8be7d7009bbdc17b7dbeb3d71a28fc053eeec09e04ffd4d2a1c56430f01cda88-json.log",
	        "Name": "/ha-685971",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-685971:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-685971",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bd134ca348993e6208e24163fa53a4c4f1fbd7f02e29e4639ee5f6764696bfc1-init/diff:/var/lib/docker/overlay2/679cc8fccbb0902884eb141037cc21fc6e7a2efac609a53e07ea6b92675ef1c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd134ca348993e6208e24163fa53a4c4f1fbd7f02e29e4639ee5f6764696bfc1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd134ca348993e6208e24163fa53a4c4f1fbd7f02e29e4639ee5f6764696bfc1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd134ca348993e6208e24163fa53a4c4f1fbd7f02e29e4639ee5f6764696bfc1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-685971",
	                "Source": "/var/lib/docker/volumes/ha-685971/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-685971",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-685971",
	                "name.minikube.sigs.k8s.io": "ha-685971",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "27f4615c538769504481e0514ae295816a29c41400d17e169640562e6090f9c9",
	            "SandboxKey": "/var/run/docker/netns/27f4615c5387",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33941"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33942"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33945"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33943"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33944"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-685971": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null,
	                    "NetworkID": "93be4a3fd51e73820e1380e82de407e03e88b5d35cdc364517872a0aec01043d",
	                    "EndpointID": "8ddef620c28e339ddeca520f0291131ce4c739d4076f1ce87d806055870d063a",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-685971",
	                        "8be7d7009bbd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
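For reference, a minimal sketch (not part of the minikube test harness) of how the published SSH port shown in the `docker container inspect` JSON above can be read programmatically; the harness itself does the same lookup with a Go template (see the `(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort` invocations in the start log further down). The file name `portfor.go` and the stdin-piping usage are assumptions for illustration only.

// portfor.go - sketch: read `docker container inspect <name>` JSON from stdin
// and print the host address/port bound to the container's 22/tcp endpoint.
// Usage (assumed): docker container inspect ha-685971 | go run portfor.go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// inspectEntry models only the fields needed from the inspect output above.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	// `docker container inspect` emits a JSON array, one entry per container.
	var entries []inspectEntry
	if err := json.NewDecoder(os.Stdin).Decode(&entries); err != nil {
		fmt.Fprintln(os.Stderr, "decode inspect JSON:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		for _, b := range e.NetworkSettings.Ports["22/tcp"] {
			// For the container inspected above this prints 127.0.0.1:33941,
			// the address the provisioner later dials over SSH.
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
		}
	}
}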
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-685971 -n ha-685971
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-685971 logs -n 25: (2.093022256s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-685971 cp ha-685971-m03:/home/docker/cp-test.txt                              | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:00 UTC |
	|         | ha-685971-m04:/home/docker/cp-test_ha-685971-m03_ha-685971-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-685971 ssh -n                                                                 | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:00 UTC |
	|         | ha-685971-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-685971 ssh -n ha-685971-m04 sudo cat                                          | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:00 UTC |
	|         | /home/docker/cp-test_ha-685971-m03_ha-685971-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-685971 cp testdata/cp-test.txt                                                | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:00 UTC |
	|         | ha-685971-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-685971 ssh -n                                                                 | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:00 UTC |
	|         | ha-685971-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-685971 cp ha-685971-m04:/home/docker/cp-test.txt                              | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2126305494/001/cp-test_ha-685971-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-685971 ssh -n                                                                 | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:00 UTC |
	|         | ha-685971-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-685971 cp ha-685971-m04:/home/docker/cp-test.txt                              | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:00 UTC |
	|         | ha-685971:/home/docker/cp-test_ha-685971-m04_ha-685971.txt                       |           |         |         |                     |                     |
	| ssh     | ha-685971 ssh -n                                                                 | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:00 UTC |
	|         | ha-685971-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-685971 ssh -n ha-685971 sudo cat                                              | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:00 UTC |
	|         | /home/docker/cp-test_ha-685971-m04_ha-685971.txt                                 |           |         |         |                     |                     |
	| cp      | ha-685971 cp ha-685971-m04:/home/docker/cp-test.txt                              | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:00 UTC |
	|         | ha-685971-m02:/home/docker/cp-test_ha-685971-m04_ha-685971-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-685971 ssh -n                                                                 | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:00 UTC |
	|         | ha-685971-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-685971 ssh -n ha-685971-m02 sudo cat                                          | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:00 UTC |
	|         | /home/docker/cp-test_ha-685971-m04_ha-685971-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-685971 cp ha-685971-m04:/home/docker/cp-test.txt                              | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:00 UTC |
	|         | ha-685971-m03:/home/docker/cp-test_ha-685971-m04_ha-685971-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-685971 ssh -n                                                                 | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:00 UTC |
	|         | ha-685971-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-685971 ssh -n ha-685971-m03 sudo cat                                          | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:00 UTC |
	|         | /home/docker/cp-test_ha-685971-m04_ha-685971-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-685971 node stop m02 -v=7                                                     | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:00 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-685971 node start m02 -v=7                                                    | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:00 UTC | 07 Oct 24 11:01 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-685971 -v=7                                                           | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-685971 -v=7                                                                | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:01 UTC | 07 Oct 24 11:01 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-685971 --wait=true -v=7                                                    | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:01 UTC | 07 Oct 24 11:04 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-685971                                                                | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:04 UTC |                     |
	| node    | ha-685971 node delete m03 -v=7                                                   | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:04 UTC | 07 Oct 24 11:04 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-685971 stop -v=7                                                              | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:04 UTC | 07 Oct 24 11:05 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-685971 --wait=true                                                         | ha-685971 | jenkins | v1.34.0 | 07 Oct 24 11:05 UTC | 07 Oct 24 11:07 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:05:19
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:05:19.596548  962848 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:05:19.596808  962848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:05:19.596839  962848 out.go:358] Setting ErrFile to fd 2...
	I1007 11:05:19.596862  962848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:05:19.597133  962848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-891319/.minikube/bin
	I1007 11:05:19.597540  962848 out.go:352] Setting JSON to false
	I1007 11:05:19.598530  962848 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24464,"bootTime":1728274656,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1007 11:05:19.598629  962848 start.go:139] virtualization:  
	I1007 11:05:19.601846  962848 out.go:177] * [ha-685971] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 11:05:19.603808  962848 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 11:05:19.603870  962848 notify.go:220] Checking for updates...
	I1007 11:05:19.608168  962848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:05:19.610192  962848 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-891319/kubeconfig
	I1007 11:05:19.611971  962848 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-891319/.minikube
	I1007 11:05:19.614314  962848 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 11:05:19.616403  962848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:05:19.618642  962848 config.go:182] Loaded profile config "ha-685971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:05:19.619151  962848 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:05:19.641858  962848 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 11:05:19.641979  962848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:05:19.697028  962848 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:1 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:51 SystemTime:2024-10-07 11:05:19.687310494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:05:19.697139  962848 docker.go:318] overlay module found
	I1007 11:05:19.699138  962848 out.go:177] * Using the docker driver based on existing profile
	I1007 11:05:19.701505  962848 start.go:297] selected driver: docker
	I1007 11:05:19.701528  962848 start.go:901] validating driver "docker" against &{Name:ha-685971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685971 Namespace:default APIServerHAVIP:192.168.58.254 APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.58.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logvi
ewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: S
taticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:05:19.701674  962848 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:05:19.701781  962848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:05:19.751071  962848 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:1 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:51 SystemTime:2024-10-07 11:05:19.741936325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:05:19.751542  962848 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 11:05:19.751575  962848 cni.go:84] Creating CNI manager for ""
	I1007 11:05:19.751618  962848 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 11:05:19.751675  962848 start.go:340] cluster config:
	{Name:ha-685971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685971 Namespace:default APIServerHAVIP:192.168.58.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.58.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvi
dia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:05:19.755674  962848 out.go:177] * Starting "ha-685971" primary control-plane node in "ha-685971" cluster
	I1007 11:05:19.757910  962848 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 11:05:19.759780  962848 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 11:05:19.761705  962848 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:05:19.761763  962848 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1007 11:05:19.761790  962848 cache.go:56] Caching tarball of preloaded images
	I1007 11:05:19.761787  962848 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 11:05:19.761870  962848 preload.go:172] Found /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1007 11:05:19.761881  962848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 11:05:19.762017  962848 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/config.json ...
	I1007 11:05:19.786039  962848 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 11:05:19.786065  962848 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1007 11:05:19.786079  962848 cache.go:194] Successfully downloaded all kic artifacts
	I1007 11:05:19.786102  962848 start.go:360] acquireMachinesLock for ha-685971: {Name:mk92ab6c37e57a99c4e11b9f4f73c335e2491efe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:05:19.786158  962848 start.go:364] duration metric: took 34.552µs to acquireMachinesLock for "ha-685971"
	I1007 11:05:19.786184  962848 start.go:96] Skipping create...Using existing machine configuration
	I1007 11:05:19.786200  962848 fix.go:54] fixHost starting: 
	I1007 11:05:19.786448  962848 cli_runner.go:164] Run: docker container inspect ha-685971 --format={{.State.Status}}
	I1007 11:05:19.801376  962848 fix.go:112] recreateIfNeeded on ha-685971: state=Stopped err=<nil>
	W1007 11:05:19.801416  962848 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 11:05:19.805188  962848 out.go:177] * Restarting existing docker container for "ha-685971" ...
	I1007 11:05:19.806984  962848 cli_runner.go:164] Run: docker start ha-685971
	I1007 11:05:20.089421  962848 cli_runner.go:164] Run: docker container inspect ha-685971 --format={{.State.Status}}
	I1007 11:05:20.110581  962848 kic.go:430] container "ha-685971" state is running.
	I1007 11:05:20.110994  962848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-685971
	I1007 11:05:20.138456  962848 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/config.json ...
	I1007 11:05:20.138714  962848 machine.go:93] provisionDockerMachine start ...
	I1007 11:05:20.138779  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971
	I1007 11:05:20.159084  962848 main.go:141] libmachine: Using SSH client type: native
	I1007 11:05:20.159366  962848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33941 <nil> <nil>}
	I1007 11:05:20.159376  962848 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 11:05:20.160913  962848 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51808->127.0.0.1:33941: read: connection reset by peer
	I1007 11:05:23.299695  962848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685971
	
	I1007 11:05:23.299724  962848 ubuntu.go:169] provisioning hostname "ha-685971"
	I1007 11:05:23.299810  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971
	I1007 11:05:23.317186  962848 main.go:141] libmachine: Using SSH client type: native
	I1007 11:05:23.317515  962848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33941 <nil> <nil>}
	I1007 11:05:23.317531  962848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-685971 && echo "ha-685971" | sudo tee /etc/hostname
	I1007 11:05:23.463664  962848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685971
	
	I1007 11:05:23.463755  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971
	I1007 11:05:23.480539  962848 main.go:141] libmachine: Using SSH client type: native
	I1007 11:05:23.480864  962848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33941 <nil> <nil>}
	I1007 11:05:23.480893  962848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-685971' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-685971/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-685971' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 11:05:23.616937  962848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 11:05:23.616966  962848 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19761-891319/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-891319/.minikube}
	I1007 11:05:23.616989  962848 ubuntu.go:177] setting up certificates
	I1007 11:05:23.617001  962848 provision.go:84] configureAuth start
	I1007 11:05:23.617060  962848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-685971
	I1007 11:05:23.636940  962848 provision.go:143] copyHostCerts
	I1007 11:05:23.636993  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-891319/.minikube/ca.pem
	I1007 11:05:23.637033  962848 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-891319/.minikube/ca.pem, removing ...
	I1007 11:05:23.637047  962848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-891319/.minikube/ca.pem
	I1007 11:05:23.637125  962848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-891319/.minikube/ca.pem (1078 bytes)
	I1007 11:05:23.637221  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-891319/.minikube/cert.pem
	I1007 11:05:23.637249  962848 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-891319/.minikube/cert.pem, removing ...
	I1007 11:05:23.637258  962848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-891319/.minikube/cert.pem
	I1007 11:05:23.637287  962848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-891319/.minikube/cert.pem (1123 bytes)
	I1007 11:05:23.637338  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-891319/.minikube/key.pem
	I1007 11:05:23.637365  962848 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-891319/.minikube/key.pem, removing ...
	I1007 11:05:23.637373  962848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-891319/.minikube/key.pem
	I1007 11:05:23.637401  962848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-891319/.minikube/key.pem (1679 bytes)
	I1007 11:05:23.637455  962848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca-key.pem org=jenkins.ha-685971 san=[127.0.0.1 192.168.58.2 ha-685971 localhost minikube]
	I1007 11:05:25.038781  962848 provision.go:177] copyRemoteCerts
	I1007 11:05:25.038868  962848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 11:05:25.038918  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971
	I1007 11:05:25.057612  962848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33941 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971/id_rsa Username:docker}
	I1007 11:05:25.154513  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 11:05:25.154579  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 11:05:25.182459  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 11:05:25.182528  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 11:05:25.207213  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 11:05:25.207280  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 11:05:25.232521  962848 provision.go:87] duration metric: took 1.615506631s to configureAuth
	I1007 11:05:25.232592  962848 ubuntu.go:193] setting minikube options for container-runtime
	I1007 11:05:25.232851  962848 config.go:182] Loaded profile config "ha-685971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:05:25.232968  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971
	I1007 11:05:25.249288  962848 main.go:141] libmachine: Using SSH client type: native
	I1007 11:05:25.249528  962848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33941 <nil> <nil>}
	I1007 11:05:25.249550  962848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 11:05:25.673270  962848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 11:05:25.673296  962848 machine.go:96] duration metric: took 5.534564735s to provisionDockerMachine
	I1007 11:05:25.673308  962848 start.go:293] postStartSetup for "ha-685971" (driver="docker")
	I1007 11:05:25.673320  962848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 11:05:25.673399  962848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 11:05:25.673446  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971
	I1007 11:05:25.694652  962848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33941 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971/id_rsa Username:docker}
	I1007 11:05:25.793365  962848 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 11:05:25.796568  962848 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 11:05:25.796612  962848 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 11:05:25.796625  962848 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 11:05:25.796632  962848 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 11:05:25.796647  962848 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-891319/.minikube/addons for local assets ...
	I1007 11:05:25.796710  962848 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-891319/.minikube/files for local assets ...
	I1007 11:05:25.796794  962848 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem -> 8967262.pem in /etc/ssl/certs
	I1007 11:05:25.796806  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem -> /etc/ssl/certs/8967262.pem
	I1007 11:05:25.796908  962848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 11:05:25.805621  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem --> /etc/ssl/certs/8967262.pem (1708 bytes)
	I1007 11:05:25.830703  962848 start.go:296] duration metric: took 157.378574ms for postStartSetup
	I1007 11:05:25.830841  962848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 11:05:25.830888  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971
	I1007 11:05:25.847178  962848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33941 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971/id_rsa Username:docker}
	I1007 11:05:25.941777  962848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 11:05:25.946250  962848 fix.go:56] duration metric: took 6.160048271s for fixHost
	I1007 11:05:25.946317  962848 start.go:83] releasing machines lock for "ha-685971", held for 6.160143754s
	I1007 11:05:25.946395  962848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-685971
	I1007 11:05:25.962374  962848 ssh_runner.go:195] Run: cat /version.json
	I1007 11:05:25.962435  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971
	I1007 11:05:25.962694  962848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 11:05:25.962767  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971
	I1007 11:05:25.985791  962848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33941 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971/id_rsa Username:docker}
	I1007 11:05:25.989879  962848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33941 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971/id_rsa Username:docker}
	I1007 11:05:26.085096  962848 ssh_runner.go:195] Run: systemctl --version
	I1007 11:05:26.223727  962848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 11:05:26.367627  962848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 11:05:26.371912  962848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:05:26.381286  962848 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1007 11:05:26.381380  962848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:05:26.390494  962848 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 11:05:26.390519  962848 start.go:495] detecting cgroup driver to use...
	I1007 11:05:26.390550  962848 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 11:05:26.390604  962848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 11:05:26.402296  962848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 11:05:26.413945  962848 docker.go:217] disabling cri-docker service (if available) ...
	I1007 11:05:26.414017  962848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 11:05:26.427655  962848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 11:05:26.440596  962848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 11:05:26.521319  962848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 11:05:26.614081  962848 docker.go:233] disabling docker service ...
	I1007 11:05:26.614166  962848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 11:05:26.627715  962848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 11:05:26.639900  962848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 11:05:26.737895  962848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 11:05:26.826060  962848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 11:05:26.837904  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 11:05:26.854548  962848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 11:05:26.854637  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:05:26.865809  962848 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 11:05:26.865881  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:05:26.876559  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:05:26.887171  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:05:26.898011  962848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 11:05:26.908146  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:05:26.918182  962848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:05:26.927990  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:05:26.938263  962848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 11:05:26.946788  962848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 11:05:26.955088  962848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:05:27.050821  962848 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 11:05:27.174964  962848 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 11:05:27.175106  962848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 11:05:27.179956  962848 start.go:563] Will wait 60s for crictl version
	I1007 11:05:27.180065  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:05:27.183480  962848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 11:05:27.220578  962848 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1007 11:05:27.220726  962848 ssh_runner.go:195] Run: crio --version
	I1007 11:05:27.264039  962848 ssh_runner.go:195] Run: crio --version
	I1007 11:05:27.305538  962848 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1007 11:05:27.307396  962848 cli_runner.go:164] Run: docker network inspect ha-685971 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 11:05:27.322914  962848 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1007 11:05:27.326347  962848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:05:27.337416  962848 kubeadm.go:883] updating cluster {Name:ha-685971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685971 Namespace:default APIServerHAVIP:192.168.58.254 APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.58.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false me
tallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 11:05:27.337576  962848 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:05:27.337647  962848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:05:27.383221  962848 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:05:27.383246  962848 crio.go:433] Images already preloaded, skipping extraction
	I1007 11:05:27.383300  962848 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:05:27.418978  962848 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:05:27.419003  962848 cache_images.go:84] Images are preloaded, skipping loading
	I1007 11:05:27.419019  962848 kubeadm.go:934] updating node { 192.168.58.2 8443 v1.31.1 crio true true} ...
	I1007 11:05:27.419122  962848 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-685971 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-685971 Namespace:default APIServerHAVIP:192.168.58.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 11:05:27.419206  962848 ssh_runner.go:195] Run: crio config
	I1007 11:05:27.473100  962848 cni.go:84] Creating CNI manager for ""
	I1007 11:05:27.473124  962848 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 11:05:27.473133  962848 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 11:05:27.473183  962848 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-685971 NodeName:ha-685971 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 11:05:27.473369  962848 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-685971"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
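The kubeadm config above stitches together InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration documents from the options printed at kubeadm.go:181. As a rough illustration of how such a file can be produced, here is a minimal Go text/template sketch; the clusterParams struct and initTmpl template are hypothetical stand-ins for this report, not minikube's actual types or template.

package main

import (
	"os"
	"text/template"
)

// clusterParams is a hypothetical, minimal stand-in for the values seen in the log above.
type clusterParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		AdvertiseAddress: "192.168.58.2",
		BindPort:         8443,
		NodeName:         "ha-685971",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.31.1",
	}
	// Render to stdout; in the log the rendered file is scp'd to /var/tmp/minikube/kubeadm.yaml.new.
	tmpl := template.Must(template.New("kubeadm").Parse(initTmpl))
	_ = tmpl.Execute(os.Stdout, p)
}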
	I1007 11:05:27.473393  962848 kube-vip.go:115] generating kube-vip config ...
	I1007 11:05:27.473444  962848 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1007 11:05:27.486381  962848 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 11:05:27.486482  962848 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.58.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
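The lb_enable/lb_port entries in the manifest above are present because the `lsmod | grep ip_vs` probe at 11:05:27.473444 succeeded ("auto-enabling control-plane load-balancing in kube-vip"). A minimal sketch of that kind of probe, assuming a local shell rather than minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable reports whether the ip_vs kernel module is loaded,
// mirroring the `lsmod | grep ip_vs` probe in the log.
func ipvsAvailable() bool {
	err := exec.Command("sh", "-c", "lsmod | grep ip_vs").Run()
	return err == nil // grep exits 0 only when a match was found
}

func main() {
	if ipvsAvailable() {
		fmt.Println("auto-enabling control-plane load-balancing (lb_enable=true)")
	} else {
		fmt.Println("ip_vs not loaded; leaving kube-vip load-balancing off")
	}
}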
	I1007 11:05:27.486543  962848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 11:05:27.495490  962848 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 11:05:27.495573  962848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 11:05:27.504779  962848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1007 11:05:27.522454  962848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 11:05:27.539947  962848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I1007 11:05:27.557567  962848 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 11:05:27.575556  962848 ssh_runner.go:195] Run: grep 192.168.58.254	control-plane.minikube.internal$ /etc/hosts
	I1007 11:05:27.579037  962848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:05:27.589634  962848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:05:27.668406  962848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:05:27.682671  962848 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971 for IP: 192.168.58.2
	I1007 11:05:27.682694  962848 certs.go:194] generating shared ca certs ...
	I1007 11:05:27.682710  962848 certs.go:226] acquiring lock for ca certs: {Name:mkd5251b1f18df70f58bf1f19694372431d4d649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:05:27.682920  962848 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-891319/.minikube/ca.key
	I1007 11:05:27.683000  962848 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.key
	I1007 11:05:27.683015  962848 certs.go:256] generating profile certs ...
	I1007 11:05:27.683128  962848 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/client.key
	I1007 11:05:27.683171  962848 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.key.d7d29f43
	I1007 11:05:27.683209  962848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.crt.d7d29f43 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.58.2 192.168.58.3 192.168.58.254]
	I1007 11:05:27.972003  962848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.crt.d7d29f43 ...
	I1007 11:05:27.972035  962848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.crt.d7d29f43: {Name:mk3fd675780c52a976629b6d2fffafbeac8e9b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:05:27.972260  962848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.key.d7d29f43 ...
	I1007 11:05:27.972275  962848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.key.d7d29f43: {Name:mk367c53b678fccecb9e95c204cedc13b1c3d714 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:05:27.972371  962848 certs.go:381] copying /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.crt.d7d29f43 -> /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.crt
	I1007 11:05:27.972507  962848 certs.go:385] copying /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.key.d7d29f43 -> /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.key
	I1007 11:05:27.972646  962848 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/proxy-client.key
	I1007 11:05:27.972664  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 11:05:27.972679  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 11:05:27.972696  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 11:05:27.972711  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 11:05:27.972722  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 11:05:27.972737  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 11:05:27.972758  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 11:05:27.972772  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 11:05:27.972819  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/896726.pem (1338 bytes)
	W1007 11:05:27.972851  962848 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-891319/.minikube/certs/896726_empty.pem, impossibly tiny 0 bytes
	I1007 11:05:27.972866  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 11:05:27.972894  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem (1078 bytes)
	I1007 11:05:27.972922  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem (1123 bytes)
	I1007 11:05:27.972946  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem (1679 bytes)
	I1007 11:05:27.972992  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem (1708 bytes)
	I1007 11:05:27.973022  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/896726.pem -> /usr/share/ca-certificates/896726.pem
	I1007 11:05:27.973039  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem -> /usr/share/ca-certificates/8967262.pem
	I1007 11:05:27.973053  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:05:27.973659  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 11:05:28.010924  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 11:05:28.039024  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 11:05:28.064554  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 11:05:28.089870  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1007 11:05:28.114693  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 11:05:28.139261  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 11:05:28.163243  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 11:05:28.187646  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/certs/896726.pem --> /usr/share/ca-certificates/896726.pem (1338 bytes)
	I1007 11:05:28.211676  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem --> /usr/share/ca-certificates/8967262.pem (1708 bytes)
	I1007 11:05:28.234981  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 11:05:28.258922  962848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 11:05:28.276511  962848 ssh_runner.go:195] Run: openssl version
	I1007 11:05:28.282134  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8967262.pem && ln -fs /usr/share/ca-certificates/8967262.pem /etc/ssl/certs/8967262.pem"
	I1007 11:05:28.291224  962848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8967262.pem
	I1007 11:05:28.294652  962848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:53 /usr/share/ca-certificates/8967262.pem
	I1007 11:05:28.294721  962848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8967262.pem
	I1007 11:05:28.301456  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8967262.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 11:05:28.310390  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 11:05:28.319439  962848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:05:28.322758  962848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:05:28.322828  962848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:05:28.329991  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 11:05:28.338506  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/896726.pem && ln -fs /usr/share/ca-certificates/896726.pem /etc/ssl/certs/896726.pem"
	I1007 11:05:28.347700  962848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/896726.pem
	I1007 11:05:28.350957  962848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:53 /usr/share/ca-certificates/896726.pem
	I1007 11:05:28.351018  962848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/896726.pem
	I1007 11:05:28.359326  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/896726.pem /etc/ssl/certs/51391683.0"
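Each `openssl x509 -hash -noout` run above prints the certificate's subject-name hash (3ec20f2e, b5213941, and 51391683 in this log), and the matching `ln -fs` creates the `<hash>.0` symlink that OpenSSL's CA lookup expects under /etc/ssl/certs. A local sketch of that same pair of steps, assuming the openssl binary is on PATH; the path below is just one of the files from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and links
// /etc/ssl/certs/<hash>.0 to it, like the `openssl x509 -hash` + `ln -fs`
// pair in the log. Requires write access to /etc/ssl/certs.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}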
	I1007 11:05:28.368585  962848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 11:05:28.372074  962848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 11:05:28.378976  962848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 11:05:28.386631  962848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 11:05:28.393422  962848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 11:05:28.400173  962848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 11:05:28.406986  962848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
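The six `-checkend 86400` runs verify that none of the control-plane certificates expires within the next 24 hours before the cluster restart is attempted. A rough Go equivalent of that check using crypto/x509; the path is just one example taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, i.e. what `openssl x509 -checkend` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}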
	I1007 11:05:28.413783  962848 kubeadm.go:392] StartCluster: {Name:ha-685971 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-685971 Namespace:default APIServerHAVIP:192.168.58.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.58.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:05:28.413913  962848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 11:05:28.413997  962848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 11:05:28.455189  962848 cri.go:89] found id: ""
	I1007 11:05:28.455259  962848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 11:05:28.464155  962848 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 11:05:28.464174  962848 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 11:05:28.464233  962848 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 11:05:28.472839  962848 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 11:05:28.473249  962848 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-685971" does not appear in /home/jenkins/minikube-integration/19761-891319/kubeconfig
	I1007 11:05:28.473360  962848 kubeconfig.go:62] /home/jenkins/minikube-integration/19761-891319/kubeconfig needs updating (will repair): [kubeconfig missing "ha-685971" cluster setting kubeconfig missing "ha-685971" context setting]
	I1007 11:05:28.473675  962848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/kubeconfig: {Name:mk44557a7348260d019750a5a9dae3060b2fe543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:05:28.474063  962848 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19761-891319/kubeconfig
	I1007 11:05:28.474348  962848 kapi.go:59] client config for ha-685971: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/client.crt", KeyFile:"/home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/client.key", CAFile:"/home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e94a20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
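The rest.Config dumped above is the certificate-based client minikube uses to probe the apiserver endpoint for this profile. A stripped-down sketch of building an equivalent client with client-go, assuming the k8s.io/client-go and k8s.io/apimachinery modules are available; the host and file paths are taken from the log line:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.58.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/client.key",
			CAFile:   "/home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List nodes as a smoke test that the certificate-based auth works.
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}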
	I1007 11:05:28.475000  962848 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 11:05:28.475086  962848 cert_rotation.go:140] Starting client certificate rotation controller
	I1007 11:05:28.484043  962848 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.58.2
	I1007 11:05:28.484115  962848 kubeadm.go:597] duration metric: took 19.933287ms to restartPrimaryControlPlane
	I1007 11:05:28.484140  962848 kubeadm.go:394] duration metric: took 70.37531ms to StartCluster
	I1007 11:05:28.484179  962848 settings.go:142] acquiring lock: {Name:mka20a3e6b00d8e089bb672b1d6ff1f77b6f764a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:05:28.484269  962848 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19761-891319/kubeconfig
	I1007 11:05:28.484857  962848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/kubeconfig: {Name:mk44557a7348260d019750a5a9dae3060b2fe543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:05:28.485056  962848 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 11:05:28.485082  962848 start.go:241] waiting for startup goroutines ...
	I1007 11:05:28.485090  962848 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 11:05:28.485479  962848 config.go:182] Loaded profile config "ha-685971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:05:28.489785  962848 out.go:177] * Enabled addons: 
	I1007 11:05:28.491637  962848 addons.go:510] duration metric: took 6.54459ms for enable addons: enabled=[]
	I1007 11:05:28.491673  962848 start.go:246] waiting for cluster config update ...
	I1007 11:05:28.491682  962848 start.go:255] writing updated cluster config ...
	I1007 11:05:28.493904  962848 out.go:201] 
	I1007 11:05:28.495890  962848 config.go:182] Loaded profile config "ha-685971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:05:28.496044  962848 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/config.json ...
	I1007 11:05:28.498120  962848 out.go:177] * Starting "ha-685971-m02" control-plane node in "ha-685971" cluster
	I1007 11:05:28.500072  962848 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 11:05:28.501975  962848 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 11:05:28.503415  962848 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:05:28.503460  962848 cache.go:56] Caching tarball of preloaded images
	I1007 11:05:28.503506  962848 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 11:05:28.503577  962848 preload.go:172] Found /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1007 11:05:28.503595  962848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 11:05:28.503746  962848 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/config.json ...
	I1007 11:05:28.521785  962848 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 11:05:28.521818  962848 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1007 11:05:28.521849  962848 cache.go:194] Successfully downloaded all kic artifacts
	I1007 11:05:28.521893  962848 start.go:360] acquireMachinesLock for ha-685971-m02: {Name:mk48cd50891cb45b664fc00597eeda607dab1e57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:05:28.521976  962848 start.go:364] duration metric: took 63.786µs to acquireMachinesLock for "ha-685971-m02"
	I1007 11:05:28.522013  962848 start.go:96] Skipping create...Using existing machine configuration
	I1007 11:05:28.522028  962848 fix.go:54] fixHost starting: m02
	I1007 11:05:28.522430  962848 cli_runner.go:164] Run: docker container inspect ha-685971-m02 --format={{.State.Status}}
	I1007 11:05:28.538170  962848 fix.go:112] recreateIfNeeded on ha-685971-m02: state=Stopped err=<nil>
	W1007 11:05:28.538204  962848 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 11:05:28.540604  962848 out.go:177] * Restarting existing docker container for "ha-685971-m02" ...
	I1007 11:05:28.542759  962848 cli_runner.go:164] Run: docker start ha-685971-m02
	I1007 11:05:28.834478  962848 cli_runner.go:164] Run: docker container inspect ha-685971-m02 --format={{.State.Status}}
	I1007 11:05:28.857664  962848 kic.go:430] container "ha-685971-m02" state is running.
	I1007 11:05:28.858021  962848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-685971-m02
	I1007 11:05:28.879585  962848 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/config.json ...
	I1007 11:05:28.879905  962848 machine.go:93] provisionDockerMachine start ...
	I1007 11:05:28.879975  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m02
	I1007 11:05:28.897749  962848 main.go:141] libmachine: Using SSH client type: native
	I1007 11:05:28.897996  962848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33946 <nil> <nil>}
	I1007 11:05:28.898005  962848 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 11:05:28.898645  962848 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48318->127.0.0.1:33946: read: connection reset by peer
	I1007 11:05:32.080411  962848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685971-m02
	
	I1007 11:05:32.080486  962848 ubuntu.go:169] provisioning hostname "ha-685971-m02"
	I1007 11:05:32.080589  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m02
	I1007 11:05:32.112500  962848 main.go:141] libmachine: Using SSH client type: native
	I1007 11:05:32.112749  962848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33946 <nil> <nil>}
	I1007 11:05:32.112761  962848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-685971-m02 && echo "ha-685971-m02" | sudo tee /etc/hostname
	I1007 11:05:32.346194  962848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685971-m02
	
	I1007 11:05:32.346275  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m02
	I1007 11:05:32.368449  962848 main.go:141] libmachine: Using SSH client type: native
	I1007 11:05:32.368707  962848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33946 <nil> <nil>}
	I1007 11:05:32.368731  962848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-685971-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-685971-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-685971-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 11:05:32.561624  962848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 11:05:32.561699  962848 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19761-891319/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-891319/.minikube}
	I1007 11:05:32.561730  962848 ubuntu.go:177] setting up certificates
	I1007 11:05:32.561775  962848 provision.go:84] configureAuth start
	I1007 11:05:32.561893  962848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-685971-m02
	I1007 11:05:32.589469  962848 provision.go:143] copyHostCerts
	I1007 11:05:32.589509  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-891319/.minikube/key.pem
	I1007 11:05:32.589540  962848 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-891319/.minikube/key.pem, removing ...
	I1007 11:05:32.589547  962848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-891319/.minikube/key.pem
	I1007 11:05:32.589620  962848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-891319/.minikube/key.pem (1679 bytes)
	I1007 11:05:32.589695  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-891319/.minikube/ca.pem
	I1007 11:05:32.589711  962848 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-891319/.minikube/ca.pem, removing ...
	I1007 11:05:32.589715  962848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-891319/.minikube/ca.pem
	I1007 11:05:32.589740  962848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-891319/.minikube/ca.pem (1078 bytes)
	I1007 11:05:32.589822  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-891319/.minikube/cert.pem
	I1007 11:05:32.589838  962848 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-891319/.minikube/cert.pem, removing ...
	I1007 11:05:32.589843  962848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-891319/.minikube/cert.pem
	I1007 11:05:32.589866  962848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-891319/.minikube/cert.pem (1123 bytes)
	I1007 11:05:32.589909  962848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca-key.pem org=jenkins.ha-685971-m02 san=[127.0.0.1 192.168.58.3 ha-685971-m02 localhost minikube]
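configureAuth regenerates the docker-machine style server certificate whose SANs are listed above (127.0.0.1, 192.168.58.3, ha-685971-m02, localhost, minikube). The sketch below generates a certificate with those SANs using crypto/x509, but self-signed for brevity; minikube actually signs with ca.pem/ca-key.pem, so treat this as an illustration rather than its implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-685971-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provision.go:117 log line above.
		DNSNames:    []string{"ha-685971-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.58.3")},
	}
	// Self-signed for brevity; the real step signs with the machine CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}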
	I1007 11:05:33.001547  962848 provision.go:177] copyRemoteCerts
	I1007 11:05:33.001708  962848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 11:05:33.001777  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m02
	I1007 11:05:33.026105  962848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33946 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971-m02/id_rsa Username:docker}
	I1007 11:05:33.146529  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 11:05:33.146596  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 11:05:33.197660  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 11:05:33.197722  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 11:05:33.258342  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 11:05:33.258406  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 11:05:33.318541  962848 provision.go:87] duration metric: took 756.721497ms to configureAuth
	I1007 11:05:33.318569  962848 ubuntu.go:193] setting minikube options for container-runtime
	I1007 11:05:33.318799  962848 config.go:182] Loaded profile config "ha-685971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:05:33.318905  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m02
	I1007 11:05:33.350208  962848 main.go:141] libmachine: Using SSH client type: native
	I1007 11:05:33.350501  962848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33946 <nil> <nil>}
	I1007 11:05:33.350518  962848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 11:05:33.782292  962848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 11:05:33.782316  962848 machine.go:96] duration metric: took 4.902399013s to provisionDockerMachine
	I1007 11:05:33.782334  962848 start.go:293] postStartSetup for "ha-685971-m02" (driver="docker")
	I1007 11:05:33.782393  962848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 11:05:33.782468  962848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 11:05:33.782514  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m02
	I1007 11:05:33.801415  962848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33946 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971-m02/id_rsa Username:docker}
	I1007 11:05:33.927820  962848 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 11:05:33.941242  962848 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 11:05:33.941276  962848 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 11:05:33.941287  962848 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 11:05:33.941295  962848 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 11:05:33.941306  962848 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-891319/.minikube/addons for local assets ...
	I1007 11:05:33.941364  962848 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-891319/.minikube/files for local assets ...
	I1007 11:05:33.941439  962848 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem -> 8967262.pem in /etc/ssl/certs
	I1007 11:05:33.941446  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem -> /etc/ssl/certs/8967262.pem
	I1007 11:05:33.941555  962848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 11:05:34.003516  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem --> /etc/ssl/certs/8967262.pem (1708 bytes)
	I1007 11:05:34.070084  962848 start.go:296] duration metric: took 287.68703ms for postStartSetup
	I1007 11:05:34.070212  962848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 11:05:34.070289  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m02
	I1007 11:05:34.096404  962848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33946 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971-m02/id_rsa Username:docker}
	I1007 11:05:34.230418  962848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 11:05:34.251527  962848 fix.go:56] duration metric: took 5.729496492s for fixHost
	I1007 11:05:34.251550  962848 start.go:83] releasing machines lock for "ha-685971-m02", held for 5.729561485s
	I1007 11:05:34.251616  962848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-685971-m02
	I1007 11:05:34.285750  962848 out.go:177] * Found network options:
	I1007 11:05:34.288749  962848 out.go:177]   - NO_PROXY=192.168.58.2
	W1007 11:05:34.290748  962848 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 11:05:34.290792  962848 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 11:05:34.290863  962848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 11:05:34.290906  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m02
	I1007 11:05:34.290925  962848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 11:05:34.290997  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m02
	I1007 11:05:34.328599  962848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33946 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971-m02/id_rsa Username:docker}
	I1007 11:05:34.331199  962848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33946 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971-m02/id_rsa Username:docker}
	I1007 11:05:34.787675  962848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 11:05:34.808097  962848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:05:34.829366  962848 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1007 11:05:34.829443  962848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:05:34.877074  962848 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
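The two find/mv runs above disable any pre-existing loopback, bridge, or podman CNI configs by renaming them with a .mk_disabled suffix, so CRI-O only picks up the CNI that minikube installs. A rough local equivalent of that rename pass, with the find expressions simplified into globs:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConfigs renames every matching file in /etc/cni/net.d to
// <name>.mk_disabled, skipping files that were already disabled.
func disableConfigs(patterns ...string) error {
	for _, p := range patterns {
		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", p))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", m)
		}
	}
	return nil
}

func main() {
	_ = disableConfigs("*loopback.conf*", "*bridge*", "*podman*")
}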
	I1007 11:05:34.877099  962848 start.go:495] detecting cgroup driver to use...
	I1007 11:05:34.877131  962848 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 11:05:34.877186  962848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 11:05:34.936432  962848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 11:05:34.977615  962848 docker.go:217] disabling cri-docker service (if available) ...
	I1007 11:05:34.977679  962848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 11:05:35.032909  962848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 11:05:35.075552  962848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 11:05:35.366490  962848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 11:05:35.664847  962848 docker.go:233] disabling docker service ...
	I1007 11:05:35.664971  962848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 11:05:35.727046  962848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 11:05:35.766813  962848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 11:05:36.073777  962848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 11:05:36.365454  962848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 11:05:36.410876  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 11:05:36.508670  962848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 11:05:36.508792  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:05:36.539046  962848 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 11:05:36.539177  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:05:36.564036  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:05:36.590834  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:05:36.647247  962848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 11:05:36.716910  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:05:36.763199  962848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:05:36.810591  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
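The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is forced to cgroupfs, conmon_cgroup is reset to "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A minimal sketch of the first two edits using Go's regexp instead of sed; the remaining edits follow the same pattern:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}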
	I1007 11:05:36.868786  962848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 11:05:36.910704  962848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 11:05:36.945639  962848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:05:37.240718  962848 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 11:05:38.717749  962848 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.476933313s)
	I1007 11:05:38.717835  962848 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 11:05:38.717913  962848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 11:05:38.727772  962848 start.go:563] Will wait 60s for crictl version
	I1007 11:05:38.727890  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:05:38.732891  962848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 11:05:38.821177  962848 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1007 11:05:38.821333  962848 ssh_runner.go:195] Run: crio --version
	I1007 11:05:38.917636  962848 ssh_runner.go:195] Run: crio --version
	I1007 11:05:39.016455  962848 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1007 11:05:39.019004  962848 out.go:177]   - env NO_PROXY=192.168.58.2
	I1007 11:05:39.021549  962848 cli_runner.go:164] Run: docker network inspect ha-685971 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 11:05:39.050324  962848 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1007 11:05:39.066261  962848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:05:39.083636  962848 mustload.go:65] Loading cluster: ha-685971
	I1007 11:05:39.083905  962848 config.go:182] Loaded profile config "ha-685971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:05:39.084181  962848 cli_runner.go:164] Run: docker container inspect ha-685971 --format={{.State.Status}}
	I1007 11:05:39.128439  962848 host.go:66] Checking if "ha-685971" exists ...
	I1007 11:05:39.128734  962848 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971 for IP: 192.168.58.3
	I1007 11:05:39.128741  962848 certs.go:194] generating shared ca certs ...
	I1007 11:05:39.128785  962848 certs.go:226] acquiring lock for ca certs: {Name:mkd5251b1f18df70f58bf1f19694372431d4d649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:05:39.128904  962848 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-891319/.minikube/ca.key
	I1007 11:05:39.128960  962848 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.key
	I1007 11:05:39.128967  962848 certs.go:256] generating profile certs ...
	I1007 11:05:39.129045  962848 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/client.key
	I1007 11:05:39.129107  962848 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.key.dae55327
	I1007 11:05:39.129151  962848 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/proxy-client.key
	I1007 11:05:39.129160  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 11:05:39.129173  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 11:05:39.129186  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 11:05:39.129196  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 11:05:39.129207  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 11:05:39.129220  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 11:05:39.129231  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 11:05:39.129243  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 11:05:39.129292  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/896726.pem (1338 bytes)
	W1007 11:05:39.129319  962848 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-891319/.minikube/certs/896726_empty.pem, impossibly tiny 0 bytes
	I1007 11:05:39.129326  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 11:05:39.129351  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem (1078 bytes)
	I1007 11:05:39.129377  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem (1123 bytes)
	I1007 11:05:39.129397  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem (1679 bytes)
	I1007 11:05:39.129439  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem (1708 bytes)
	I1007 11:05:39.129467  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/896726.pem -> /usr/share/ca-certificates/896726.pem
	I1007 11:05:39.129481  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem -> /usr/share/ca-certificates/8967262.pem
	I1007 11:05:39.129492  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:05:39.129548  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971
	I1007 11:05:39.155336  962848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33941 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971/id_rsa Username:docker}
	I1007 11:05:39.260550  962848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 11:05:39.271183  962848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 11:05:39.297336  962848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 11:05:39.304110  962848 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1007 11:05:39.316843  962848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 11:05:39.320766  962848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 11:05:39.334033  962848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 11:05:39.339045  962848 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1007 11:05:39.352766  962848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 11:05:39.366846  962848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 11:05:39.401070  962848 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 11:05:39.408293  962848 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1007 11:05:39.438807  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 11:05:39.485964  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 11:05:39.524703  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 11:05:39.566196  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 11:05:39.591304  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1007 11:05:39.616687  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 11:05:39.640783  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 11:05:39.665402  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 11:05:39.702175  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/certs/896726.pem --> /usr/share/ca-certificates/896726.pem (1338 bytes)
	I1007 11:05:39.751703  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem --> /usr/share/ca-certificates/8967262.pem (1708 bytes)
	I1007 11:05:39.787884  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 11:05:39.833822  962848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 11:05:39.866615  962848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1007 11:05:39.892297  962848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 11:05:39.946354  962848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1007 11:05:39.969913  962848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 11:05:39.998412  962848 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1007 11:05:40.029990  962848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 11:05:40.064338  962848 ssh_runner.go:195] Run: openssl version
	I1007 11:05:40.075476  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/896726.pem && ln -fs /usr/share/ca-certificates/896726.pem /etc/ssl/certs/896726.pem"
	I1007 11:05:40.087113  962848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/896726.pem
	I1007 11:05:40.096574  962848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:53 /usr/share/ca-certificates/896726.pem
	I1007 11:05:40.096737  962848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/896726.pem
	I1007 11:05:40.108974  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/896726.pem /etc/ssl/certs/51391683.0"
	I1007 11:05:40.119694  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8967262.pem && ln -fs /usr/share/ca-certificates/8967262.pem /etc/ssl/certs/8967262.pem"
	I1007 11:05:40.143942  962848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8967262.pem
	I1007 11:05:40.148598  962848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:53 /usr/share/ca-certificates/8967262.pem
	I1007 11:05:40.148743  962848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8967262.pem
	I1007 11:05:40.159783  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8967262.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 11:05:40.171423  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 11:05:40.182977  962848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:05:40.187431  962848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:05:40.187554  962848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:05:40.197174  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 11:05:40.207523  962848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 11:05:40.211863  962848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 11:05:40.219675  962848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 11:05:40.228066  962848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 11:05:40.241305  962848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 11:05:40.252202  962848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 11:05:40.259692  962848 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
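The `-checkend 86400` calls above ask openssl whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit means the cert is about to expire and needs attention before the kubelet is started. A minimal Go sketch of the same check follows; the file path is illustrative and this is not minikube's actual implementation:

```go
// Sketch: the Go equivalent of `openssl x509 -noout -in cert.crt -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// -checkend 86400: fail if the certificate is no longer valid 86400 seconds from now.
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
```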
	I1007 11:05:40.268990  962848 kubeadm.go:934] updating node {m02 192.168.58.3 8443 v1.31.1 crio true true} ...
	I1007 11:05:40.269158  962848 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-685971-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-685971 Namespace:default APIServerHAVIP:192.168.58.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 11:05:40.269207  962848 kube-vip.go:115] generating kube-vip config ...
	I1007 11:05:40.269294  962848 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1007 11:05:40.287123  962848 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 11:05:40.287238  962848 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.58.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 11:05:40.287327  962848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 11:05:40.303891  962848 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 11:05:40.304015  962848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 11:05:40.319608  962848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1007 11:05:40.347998  962848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 11:05:40.376521  962848 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 11:05:40.410151  962848 ssh_runner.go:195] Run: grep 192.168.58.254	control-plane.minikube.internal$ /etc/hosts
	I1007 11:05:40.414056  962848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:05:40.429403  962848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:05:40.637943  962848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:05:40.658395  962848 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 11:05:40.658739  962848 config.go:182] Loaded profile config "ha-685971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:05:40.662194  962848 out.go:177] * Verifying Kubernetes components...
	I1007 11:05:40.664415  962848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:05:40.815081  962848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:05:40.829572  962848 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19761-891319/kubeconfig
	I1007 11:05:40.829845  962848 kapi.go:59] client config for ha-685971: &rest.Config{Host:"https://192.168.58.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/client.crt", KeyFile:"/home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/client.key", CAFile:"/home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e94a20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 11:05:40.829904  962848 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.58.254:8443 with https://192.168.58.2:8443
	I1007 11:05:40.830151  962848 node_ready.go:35] waiting up to 6m0s for node "ha-685971-m02" to be "Ready" ...
	I1007 11:05:40.830224  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:05:40.830230  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:40.830238  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:40.830242  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:51.577474  962848 round_trippers.go:574] Response Status: 500 Internal Server Error in 10747 milliseconds
	I1007 11:05:51.578158  962848 node_ready.go:53] error getting node "ha-685971-m02": etcdserver: request timed out
	I1007 11:05:51.578228  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:05:51.578239  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:51.578247  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:51.578252  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:57.000011  962848 round_trippers.go:574] Response Status: 200 OK in 5421 milliseconds
	I1007 11:05:57.003282  962848 node_ready.go:49] node "ha-685971-m02" has status "Ready":"True"
	I1007 11:05:57.003323  962848 node_ready.go:38] duration metric: took 16.173158132s for node "ha-685971-m02" to be "Ready" ...
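The node wait above repeatedly issues GET /api/v1/nodes/ha-685971-m02 until the node's Ready condition reports True; the first poll returned 500 ("etcdserver: request timed out") while the restarted etcd member was still catching up, and the retry succeeded about 16 seconds later. Below is a minimal client-go sketch of the same readiness test; the kubeconfig path and node name come from the log, everything else is illustrative rather than minikube's actual code:

```go
// Sketch: fetch a node and report whether its Ready condition is True,
// mirroring the node_ready wait in the log above.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19761-891319/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "ha-685971-m02", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	ready := false
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("node %s Ready=%v\n", node.Name, ready)
}
```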
	I1007 11:05:57.003337  962848 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 11:05:57.003386  962848 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 11:05:57.003408  962848 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 11:05:57.003476  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1007 11:05:57.003487  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:57.003495  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:57.003499  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:57.036613  962848 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I1007 11:05:57.049538  962848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-b5fbm" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:57.049655  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:05:57.049665  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:57.049674  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:57.049686  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:57.052523  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:05:57.053551  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:05:57.053572  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:57.053580  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:57.053587  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:57.056213  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:05:57.056791  962848 pod_ready.go:93] pod "coredns-7c65d6cfc9-b5fbm" in "kube-system" namespace has status "Ready":"True"
	I1007 11:05:57.056806  962848 pod_ready.go:82] duration metric: took 7.228171ms for pod "coredns-7c65d6cfc9-b5fbm" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:57.056817  962848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-z86x9" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:57.056882  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-z86x9
	I1007 11:05:57.056888  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:57.056896  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:57.056902  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:57.060581  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:05:57.061344  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:05:57.061365  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:57.061374  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:57.061381  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:57.064071  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:05:57.064878  962848 pod_ready.go:93] pod "coredns-7c65d6cfc9-z86x9" in "kube-system" namespace has status "Ready":"True"
	I1007 11:05:57.064905  962848 pod_ready.go:82] duration metric: took 8.079989ms for pod "coredns-7c65d6cfc9-z86x9" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:57.064918  962848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685971" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:57.064986  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685971
	I1007 11:05:57.064996  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:57.065005  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:57.065011  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:57.067817  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:05:57.068585  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:05:57.068605  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:57.068614  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:57.068619  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:57.071189  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:05:57.071771  962848 pod_ready.go:93] pod "etcd-ha-685971" in "kube-system" namespace has status "Ready":"True"
	I1007 11:05:57.071801  962848 pod_ready.go:82] duration metric: took 6.873548ms for pod "etcd-ha-685971" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:57.071814  962848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685971-m02" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:57.071896  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685971-m02
	I1007 11:05:57.071906  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:57.071914  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:57.071918  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:57.074705  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:05:57.075388  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:05:57.075405  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:57.075413  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:57.075427  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:57.077937  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:05:57.078544  962848 pod_ready.go:93] pod "etcd-ha-685971-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 11:05:57.078565  962848 pod_ready.go:82] duration metric: took 6.743981ms for pod "etcd-ha-685971-m02" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:57.078585  962848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685971-m03" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:57.078664  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685971-m03
	I1007 11:05:57.078673  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:57.078680  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:57.078685  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:57.081393  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:05:57.204292  962848 request.go:632] Waited for 122.229738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971-m03
	I1007 11:05:57.204360  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m03
	I1007 11:05:57.204366  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:57.204376  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:57.204386  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:57.206832  962848 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1007 11:05:57.206953  962848 pod_ready.go:98] node "ha-685971-m03" hosting pod "etcd-ha-685971-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-685971-m03": nodes "ha-685971-m03" not found
	I1007 11:05:57.206973  962848 pod_ready.go:82] duration metric: took 128.377965ms for pod "etcd-ha-685971-m03" in "kube-system" namespace to be "Ready" ...
	E1007 11:05:57.206984  962848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-685971-m03" hosting pod "etcd-ha-685971-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-685971-m03": nodes "ha-685971-m03" not found
	I1007 11:05:57.207008  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685971" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:57.404481  962848 request.go:632] Waited for 197.378843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685971
	I1007 11:05:57.404559  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685971
	I1007 11:05:57.404568  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:57.404580  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:57.404585  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:57.407671  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:05:57.603905  962848 request.go:632] Waited for 195.312698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:05:57.603978  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:05:57.603985  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:57.603992  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:57.604000  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:57.606589  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:05:57.607164  962848 pod_ready.go:93] pod "kube-apiserver-ha-685971" in "kube-system" namespace has status "Ready":"True"
	I1007 11:05:57.607187  962848 pod_ready.go:82] duration metric: took 400.166283ms for pod "kube-apiserver-ha-685971" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:57.607201  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685971-m02" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:57.803992  962848 request.go:632] Waited for 196.717382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685971-m02
	I1007 11:05:57.804066  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685971-m02
	I1007 11:05:57.804077  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:57.804124  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:57.804135  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:57.807171  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:05:58.004473  962848 request.go:632] Waited for 196.146933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:05:58.004612  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:05:58.004619  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:58.004628  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:58.004634  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:58.012942  962848 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 11:05:58.013585  962848 pod_ready.go:93] pod "kube-apiserver-ha-685971-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 11:05:58.013607  962848 pod_ready.go:82] duration metric: took 406.394288ms for pod "kube-apiserver-ha-685971-m02" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:58.013622  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685971-m03" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:58.204430  962848 request.go:632] Waited for 190.731066ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685971-m03
	I1007 11:05:58.204523  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685971-m03
	I1007 11:05:58.204551  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:58.204566  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:58.204571  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:58.213006  962848 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 11:05:58.404114  962848 request.go:632] Waited for 190.308054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971-m03
	I1007 11:05:58.404185  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m03
	I1007 11:05:58.404195  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:58.404225  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:58.404236  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:58.406833  962848 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1007 11:05:58.406960  962848 pod_ready.go:98] node "ha-685971-m03" hosting pod "kube-apiserver-ha-685971-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-685971-m03": nodes "ha-685971-m03" not found
	I1007 11:05:58.406984  962848 pod_ready.go:82] duration metric: took 393.354955ms for pod "kube-apiserver-ha-685971-m03" in "kube-system" namespace to be "Ready" ...
	E1007 11:05:58.407001  962848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-685971-m03" hosting pod "kube-apiserver-ha-685971-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-685971-m03": nodes "ha-685971-m03" not found
	I1007 11:05:58.407011  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685971" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:58.604301  962848 request.go:632] Waited for 197.164125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685971
	I1007 11:05:58.604375  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685971
	I1007 11:05:58.604386  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:58.604395  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:58.604406  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:58.625915  962848 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1007 11:05:58.803890  962848 request.go:632] Waited for 177.277868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:05:58.803966  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:05:58.803977  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:58.803986  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:58.804003  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:58.806972  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:05:58.807508  962848 pod_ready.go:93] pod "kube-controller-manager-ha-685971" in "kube-system" namespace has status "Ready":"True"
	I1007 11:05:58.807533  962848 pod_ready.go:82] duration metric: took 400.507902ms for pod "kube-controller-manager-ha-685971" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:58.807550  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685971-m02" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:59.003656  962848 request.go:632] Waited for 196.032095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685971-m02
	I1007 11:05:59.003746  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685971-m02
	I1007 11:05:59.004391  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:59.004403  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:59.004408  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:59.009516  962848 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 11:05:59.203565  962848 request.go:632] Waited for 193.188905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:05:59.203678  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:05:59.203735  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:59.203763  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:59.203791  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:59.207177  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:05:59.208169  962848 pod_ready.go:93] pod "kube-controller-manager-ha-685971-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 11:05:59.208211  962848 pod_ready.go:82] duration metric: took 400.648496ms for pod "kube-controller-manager-ha-685971-m02" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:59.208269  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685971-m03" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:59.404292  962848 request.go:632] Waited for 195.886709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685971-m03
	I1007 11:05:59.404436  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685971-m03
	I1007 11:05:59.404476  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:59.404506  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:59.404529  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:59.409231  962848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 11:05:59.603885  962848 request.go:632] Waited for 193.180938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971-m03
	I1007 11:05:59.604001  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m03
	I1007 11:05:59.604038  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:59.604077  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:59.604099  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:59.607132  962848 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1007 11:05:59.607530  962848 pod_ready.go:98] node "ha-685971-m03" hosting pod "kube-controller-manager-ha-685971-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-685971-m03": nodes "ha-685971-m03" not found
	I1007 11:05:59.607586  962848 pod_ready.go:82] duration metric: took 399.289227ms for pod "kube-controller-manager-ha-685971-m03" in "kube-system" namespace to be "Ready" ...
	E1007 11:05:59.607628  962848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-685971-m03" hosting pod "kube-controller-manager-ha-685971-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-685971-m03": nodes "ha-685971-m03" not found
	I1007 11:05:59.607655  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4frrm" in "kube-system" namespace to be "Ready" ...
	I1007 11:05:59.804052  962848 request.go:632] Waited for 196.297751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4frrm
	I1007 11:05:59.804170  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4frrm
	I1007 11:05:59.804219  962848 round_trippers.go:469] Request Headers:
	I1007 11:05:59.804290  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:05:59.804298  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:05:59.810462  962848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 11:06:00.003602  962848 request.go:632] Waited for 191.22397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:06:00.003669  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:06:00.003675  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:00.003684  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:00.003689  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:00.010313  962848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 11:06:00.011708  962848 pod_ready.go:93] pod "kube-proxy-4frrm" in "kube-system" namespace has status "Ready":"True"
	I1007 11:06:00.011734  962848 pod_ready.go:82] duration metric: took 404.056154ms for pod "kube-proxy-4frrm" in "kube-system" namespace to be "Ready" ...
	I1007 11:06:00.011749  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-787s7" in "kube-system" namespace to be "Ready" ...
	I1007 11:06:00.204109  962848 request.go:632] Waited for 192.279135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-787s7
	I1007 11:06:00.204263  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-787s7
	I1007 11:06:00.204293  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:00.204318  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:00.204355  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:00.207521  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:06:00.403887  962848 request.go:632] Waited for 189.34445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:06:00.404021  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:06:00.404063  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:00.404093  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:00.404118  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:00.407923  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:06:00.408651  962848 pod_ready.go:93] pod "kube-proxy-787s7" in "kube-system" namespace has status "Ready":"True"
	I1007 11:06:00.408723  962848 pod_ready.go:82] duration metric: took 396.963096ms for pod "kube-proxy-787s7" in "kube-system" namespace to be "Ready" ...
	I1007 11:06:00.408756  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8ntnf" in "kube-system" namespace to be "Ready" ...
	I1007 11:06:00.604340  962848 request.go:632] Waited for 195.48174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8ntnf
	I1007 11:06:00.604470  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8ntnf
	I1007 11:06:00.604559  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:00.604599  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:00.604622  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:00.608580  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:06:00.803576  962848 request.go:632] Waited for 194.249419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971-m03
	I1007 11:06:00.803695  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m03
	I1007 11:06:00.803716  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:00.803786  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:00.803804  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:00.806415  962848 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1007 11:06:00.806808  962848 pod_ready.go:98] node "ha-685971-m03" hosting pod "kube-proxy-8ntnf" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-685971-m03": nodes "ha-685971-m03" not found
	I1007 11:06:00.806853  962848 pod_ready.go:82] duration metric: took 398.057956ms for pod "kube-proxy-8ntnf" in "kube-system" namespace to be "Ready" ...
	E1007 11:06:00.806898  962848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-685971-m03" hosting pod "kube-proxy-8ntnf" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-685971-m03": nodes "ha-685971-m03" not found
	I1007 11:06:00.806920  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bdxpj" in "kube-system" namespace to be "Ready" ...
	I1007 11:06:01.004033  962848 request.go:632] Waited for 196.97261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bdxpj
	I1007 11:06:01.004222  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bdxpj
	I1007 11:06:01.004279  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:01.004312  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:01.004337  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:01.007889  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:06:01.204305  962848 request.go:632] Waited for 195.331873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:06:01.204434  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:06:01.204478  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:01.204502  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:01.204523  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:01.207729  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:06:01.208990  962848 pod_ready.go:93] pod "kube-proxy-bdxpj" in "kube-system" namespace has status "Ready":"True"
	I1007 11:06:01.209056  962848 pod_ready.go:82] duration metric: took 402.096618ms for pod "kube-proxy-bdxpj" in "kube-system" namespace to be "Ready" ...
	I1007 11:06:01.209084  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685971" in "kube-system" namespace to be "Ready" ...
	I1007 11:06:01.403953  962848 request.go:632] Waited for 194.774372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685971
	I1007 11:06:01.404052  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685971
	I1007 11:06:01.404079  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:01.404095  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:01.404100  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:01.407194  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:06:01.603550  962848 request.go:632] Waited for 195.276268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:06:01.603638  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:06:01.603713  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:01.603739  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:01.603744  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:01.607036  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:06:01.607713  962848 pod_ready.go:93] pod "kube-scheduler-ha-685971" in "kube-system" namespace has status "Ready":"True"
	I1007 11:06:01.607734  962848 pod_ready.go:82] duration metric: took 398.628865ms for pod "kube-scheduler-ha-685971" in "kube-system" namespace to be "Ready" ...
	I1007 11:06:01.607747  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685971-m02" in "kube-system" namespace to be "Ready" ...
	I1007 11:06:01.804444  962848 request.go:632] Waited for 196.613982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685971-m02
	I1007 11:06:01.804518  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685971-m02
	I1007 11:06:01.804527  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:01.804536  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:01.804586  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:01.807286  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:06:02.004451  962848 request.go:632] Waited for 196.572629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:06:02.004563  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:06:02.004576  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:02.004585  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:02.004598  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:02.011590  962848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 11:06:02.012740  962848 pod_ready.go:93] pod "kube-scheduler-ha-685971-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 11:06:02.012767  962848 pod_ready.go:82] duration metric: took 405.002577ms for pod "kube-scheduler-ha-685971-m02" in "kube-system" namespace to be "Ready" ...
	I1007 11:06:02.012781  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685971-m03" in "kube-system" namespace to be "Ready" ...
	I1007 11:06:02.203562  962848 request.go:632] Waited for 190.697483ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685971-m03
	I1007 11:06:02.203622  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685971-m03
	I1007 11:06:02.203628  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:02.203637  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:02.203645  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:02.206751  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:06:02.404060  962848 request.go:632] Waited for 196.153128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971-m03
	I1007 11:06:02.404164  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m03
	I1007 11:06:02.404187  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:02.404280  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:02.404322  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:02.409004  962848 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I1007 11:06:02.409350  962848 pod_ready.go:98] node "ha-685971-m03" hosting pod "kube-scheduler-ha-685971-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-685971-m03": nodes "ha-685971-m03" not found
	I1007 11:06:02.409391  962848 pod_ready.go:82] duration metric: took 396.596428ms for pod "kube-scheduler-ha-685971-m03" in "kube-system" namespace to be "Ready" ...
	E1007 11:06:02.409430  962848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-685971-m03" hosting pod "kube-scheduler-ha-685971-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-685971-m03": nodes "ha-685971-m03" not found
	I1007 11:06:02.409459  962848 pod_ready.go:39] duration metric: took 5.406106819s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
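Once the node is Ready, the test walks each system-critical pod in kube-system (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler), checking the pod's Ready condition and the Ready state of its hosting node; pods pinned to the removed ha-685971-m03 node are skipped as soon as the node lookup returns 404. A minimal sketch of that per-pod check with client-go follows; the label selector and paths are illustrative, not the test's actual code:

```go
// Sketch: list kube-system pods matching a component label and report whether
// each one has the PodReady condition set to True.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19761-891319/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{
		LabelSelector: "component=kube-apiserver",
	})
	if err != nil {
		log.Fatal(err)
	}
	for i := range pods.Items {
		fmt.Printf("%s Ready=%v\n", pods.Items[i].Name, isPodReady(&pods.Items[i]))
	}
}
```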
	I1007 11:06:02.409505  962848 api_server.go:52] waiting for apiserver process to appear ...
	I1007 11:06:02.409604  962848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 11:06:02.436525  962848 api_server.go:72] duration metric: took 21.777753399s to wait for apiserver process to appear ...
	I1007 11:06:02.436553  962848 api_server.go:88] waiting for apiserver healthz status ...
	I1007 11:06:02.436586  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:02.452187  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:02.452357  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:02.937019  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:02.949118  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:02.949210  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:03.437382  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:03.447845  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:03.447875  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:03.937593  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:03.947137  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:03.947167  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:04.437667  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:04.446140  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:04.446164  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:04.936915  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:04.944743  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:04.944776  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:05.437292  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:05.445025  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:05.445052  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:05.937537  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:05.945463  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:05.945494  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:06.437007  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:06.444713  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:06.444741  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:06.937338  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:06.944929  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:06.944966  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:07.437445  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:07.445419  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:07.445446  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:07.936637  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:07.944840  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:07.944885  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:08.437495  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:08.445557  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:08.445585  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:08.937190  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:08.944787  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:08.944821  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:09.437390  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:09.445055  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:09.445085  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:09.936696  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:09.944349  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:09.944378  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:10.436715  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:10.444864  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:10.444896  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:10.937710  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:10.958557  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:10.958609  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:11.436858  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:11.445296  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:11.445333  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:11.936828  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:11.945832  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:11.945865  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:12.437424  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:12.445189  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:12.445217  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:12.936696  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:12.964827  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:12.964860  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:13.437394  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:13.518785  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:13.518812  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:13.937363  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:13.946154  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:13.946180  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:14.437651  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:14.446662  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:14.446685  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:14.937590  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:14.946338  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:14.946372  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:15.436673  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:15.444387  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:15.444414  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:15.936799  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:15.944515  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:15.944550  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:16.436929  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:16.444925  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:16.444953  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:16.937437  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:16.945356  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:16.945386  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:17.436870  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:17.444376  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:17.444406  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:17.936616  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:17.945251  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:17.945283  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:18.436852  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:18.445428  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:18.445456  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:18.937497  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:18.945269  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:18.945301  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:19.436640  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:19.444699  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:19.444726  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:19.937434  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:19.945091  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:19.945128  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
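	Every poll in this loop reports the same single failing check, poststarthook/start-service-ip-repair-controllers, and the aggregated /healthz output withholds the reason. As a debugging sketch only (not part of the recorded test run), the verbose and per-check health endpoints can be queried directly and the kube-apiserver logs pulled, assuming the affected cluster's kubeconfig context is still reachable; CONTEXT below is a placeholder for the real context name, which is not shown in this log:
	  # aggregated verbose health report (same list as above)
	  kubectl --context CONTEXT get --raw '/healthz?verbose'
	  # per-check path typically returns the underlying error that the aggregated output withholds
	  kubectl --context CONTEXT get --raw '/healthz/poststarthook/start-service-ip-repair-controllers'
	  # the full failure reason is also logged by the apiserver itself
	  kubectl --context CONTEXT -n kube-system logs -l component=kube-apiserver --tail=200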
	I1007 11:06:20.437629  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:20.446700  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:20.446750  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:20.937076  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:20.944861  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:20.944894  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:21.437433  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:21.453490  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:21.453523  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:21.936724  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:21.944982  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:21.945016  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:22.437428  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:22.445431  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:22.445515  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:22.936950  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:22.944851  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:22.944877  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:23.437263  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:23.461078  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:23.461108  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:23.937412  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:23.944944  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:23.944974  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:24.437366  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:24.445579  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:24.445611  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:24.937066  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:24.944726  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:24.944755  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:25.437350  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:25.447479  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:25.447508  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:25.936815  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:25.945003  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:25.945050  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:26.437576  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:26.445360  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:26.445403  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:26.936690  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:26.944420  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:26.944452  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:27.436632  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:27.444363  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:27.444395  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:27.937210  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:27.944911  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:27.944937  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:28.437459  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:28.445678  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:28.445705  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:28.936704  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:28.944487  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:28.944538  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:29.437016  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:29.444711  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:29.444756  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:29.937380  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:29.945062  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:29.945094  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:30.437668  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:30.445376  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:30.445406  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:30.936702  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:30.944347  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:30.944390  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:31.436703  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:31.444674  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:31.444714  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:31.937228  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:31.946281  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:31.946360  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:32.437448  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:32.445408  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:32.445434  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:32.936661  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:32.944693  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:32.944719  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:33.437265  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:33.444900  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:33.444943  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:33.937529  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:33.945261  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:33.945288  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:34.436705  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:34.444475  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:34.444504  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:34.937164  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:34.944979  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:34.945007  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:35.437667  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:35.445578  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:35.445613  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:35.937032  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:35.944757  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:35.944798  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:36.437381  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:36.445128  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:36.445162  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:36.937354  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:36.949512  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:36.949590  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:37.436790  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:37.444710  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:37.444750  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:37.937135  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:37.944855  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:37.944883  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:38.437293  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:38.445195  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:38.445225  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:38.936817  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:38.944740  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:38.944770  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:39.437419  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:39.445247  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:39.445279  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:39.937245  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:39.944785  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:39.944813  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:40.437284  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:40.634573  962848 api_server.go:269] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": read tcp 192.168.58.1:53520->192.168.58.2:8443: read: connection reset by peer
	I1007 11:06:40.937040  962848 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 11:06:40.937154  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 11:06:40.978032  962848 cri.go:89] found id: "6a47676e23524d820ec5aa75ed799c7bb40e8abe63fbbb9cb432b6013308d636"
	I1007 11:06:40.978056  962848 cri.go:89] found id: "c570855fc50bfffd9530a1bfed1712a3426797f75b44cfe6106cf4c2a5b9bb44"
	I1007 11:06:40.978062  962848 cri.go:89] found id: ""
	I1007 11:06:40.978070  962848 logs.go:282] 2 containers: [6a47676e23524d820ec5aa75ed799c7bb40e8abe63fbbb9cb432b6013308d636 c570855fc50bfffd9530a1bfed1712a3426797f75b44cfe6106cf4c2a5b9bb44]
	I1007 11:06:40.978129  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:40.982740  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:40.986201  962848 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 11:06:40.986272  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 11:06:41.026483  962848 cri.go:89] found id: "14dc6d2f19e229c9a793ddeb4595f99aacf16e29619bb398ccc5edb1b72055d3"
	I1007 11:06:41.026506  962848 cri.go:89] found id: "46241c4d338c6d83da36f4ca804ceebe00a71970936c031c1e38872831edf76f"
	I1007 11:06:41.026512  962848 cri.go:89] found id: ""
	I1007 11:06:41.026520  962848 logs.go:282] 2 containers: [14dc6d2f19e229c9a793ddeb4595f99aacf16e29619bb398ccc5edb1b72055d3 46241c4d338c6d83da36f4ca804ceebe00a71970936c031c1e38872831edf76f]
	I1007 11:06:41.026580  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:41.030892  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:41.034539  962848 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 11:06:41.034648  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 11:06:41.090953  962848 cri.go:89] found id: ""
	I1007 11:06:41.090976  962848 logs.go:282] 0 containers: []
	W1007 11:06:41.090986  962848 logs.go:284] No container was found matching "coredns"
	I1007 11:06:41.090992  962848 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 11:06:41.091099  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 11:06:41.148796  962848 cri.go:89] found id: "488746755f4ec365475c0e80db0b5877d018124229da8381a6790a558a873537"
	I1007 11:06:41.148825  962848 cri.go:89] found id: "7ad8caf8cd8e5c1958f276a821efa4d740b902314703bfba54aacad1b3fa3655"
	I1007 11:06:41.148831  962848 cri.go:89] found id: ""
	I1007 11:06:41.148840  962848 logs.go:282] 2 containers: [488746755f4ec365475c0e80db0b5877d018124229da8381a6790a558a873537 7ad8caf8cd8e5c1958f276a821efa4d740b902314703bfba54aacad1b3fa3655]
	I1007 11:06:41.148951  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:41.152980  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:41.157782  962848 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 11:06:41.157883  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 11:06:41.213094  962848 cri.go:89] found id: "547596001ccc76d3f259e43f366a34e5fd130bd58649556a53c6f31a85835164"
	I1007 11:06:41.213118  962848 cri.go:89] found id: ""
	I1007 11:06:41.213126  962848 logs.go:282] 1 containers: [547596001ccc76d3f259e43f366a34e5fd130bd58649556a53c6f31a85835164]
	I1007 11:06:41.213215  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:41.219278  962848 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 11:06:41.219384  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 11:06:41.275259  962848 cri.go:89] found id: "9b01f8fd73c98a6f407a754d4c5d98e9cb16b3a8a5df55b6a5684f0a28627950"
	I1007 11:06:41.275332  962848 cri.go:89] found id: "3ea0ede952316d89fa1317d2d792cac676a3835028641953359eed60bd42d4b9"
	I1007 11:06:41.275360  962848 cri.go:89] found id: ""
	I1007 11:06:41.275383  962848 logs.go:282] 2 containers: [9b01f8fd73c98a6f407a754d4c5d98e9cb16b3a8a5df55b6a5684f0a28627950 3ea0ede952316d89fa1317d2d792cac676a3835028641953359eed60bd42d4b9]
	I1007 11:06:41.275467  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:41.279711  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:41.283523  962848 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 11:06:41.283658  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 11:06:41.336984  962848 cri.go:89] found id: "b70db3fb1e140b4493b769b4f4592ad264693fc38a3582992bc52195c1e78832"
	I1007 11:06:41.337057  962848 cri.go:89] found id: ""
	I1007 11:06:41.337078  962848 logs.go:282] 1 containers: [b70db3fb1e140b4493b769b4f4592ad264693fc38a3582992bc52195c1e78832]
	I1007 11:06:41.337161  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:41.341244  962848 logs.go:123] Gathering logs for dmesg ...
	I1007 11:06:41.341329  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 11:06:41.359087  962848 logs.go:123] Gathering logs for kube-apiserver [6a47676e23524d820ec5aa75ed799c7bb40e8abe63fbbb9cb432b6013308d636] ...
	I1007 11:06:41.359169  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a47676e23524d820ec5aa75ed799c7bb40e8abe63fbbb9cb432b6013308d636"
	I1007 11:06:41.423286  962848 logs.go:123] Gathering logs for kube-apiserver [c570855fc50bfffd9530a1bfed1712a3426797f75b44cfe6106cf4c2a5b9bb44] ...
	I1007 11:06:41.423361  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c570855fc50bfffd9530a1bfed1712a3426797f75b44cfe6106cf4c2a5b9bb44"
	I1007 11:06:41.469406  962848 logs.go:123] Gathering logs for CRI-O ...
	I1007 11:06:41.469432  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 11:06:41.549592  962848 logs.go:123] Gathering logs for etcd [14dc6d2f19e229c9a793ddeb4595f99aacf16e29619bb398ccc5edb1b72055d3] ...
	I1007 11:06:41.549673  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14dc6d2f19e229c9a793ddeb4595f99aacf16e29619bb398ccc5edb1b72055d3"
	I1007 11:06:41.641318  962848 logs.go:123] Gathering logs for kube-proxy [547596001ccc76d3f259e43f366a34e5fd130bd58649556a53c6f31a85835164] ...
	I1007 11:06:41.641405  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 547596001ccc76d3f259e43f366a34e5fd130bd58649556a53c6f31a85835164"
	I1007 11:06:41.720391  962848 logs.go:123] Gathering logs for kube-controller-manager [9b01f8fd73c98a6f407a754d4c5d98e9cb16b3a8a5df55b6a5684f0a28627950] ...
	I1007 11:06:41.720417  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b01f8fd73c98a6f407a754d4c5d98e9cb16b3a8a5df55b6a5684f0a28627950"
	I1007 11:06:41.809611  962848 logs.go:123] Gathering logs for kube-controller-manager [3ea0ede952316d89fa1317d2d792cac676a3835028641953359eed60bd42d4b9] ...
	I1007 11:06:41.809699  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ea0ede952316d89fa1317d2d792cac676a3835028641953359eed60bd42d4b9"
	I1007 11:06:41.856952  962848 logs.go:123] Gathering logs for kindnet [b70db3fb1e140b4493b769b4f4592ad264693fc38a3582992bc52195c1e78832] ...
	I1007 11:06:41.856980  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b70db3fb1e140b4493b769b4f4592ad264693fc38a3582992bc52195c1e78832"
	I1007 11:06:41.907256  962848 logs.go:123] Gathering logs for kubelet ...
	I1007 11:06:41.907333  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 11:06:41.992782  962848 logs.go:123] Gathering logs for describe nodes ...
	I1007 11:06:41.992861  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 11:06:42.433985  962848 logs.go:123] Gathering logs for etcd [46241c4d338c6d83da36f4ca804ceebe00a71970936c031c1e38872831edf76f] ...
	I1007 11:06:42.434062  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46241c4d338c6d83da36f4ca804ceebe00a71970936c031c1e38872831edf76f"
	I1007 11:06:42.513324  962848 logs.go:123] Gathering logs for kube-scheduler [7ad8caf8cd8e5c1958f276a821efa4d740b902314703bfba54aacad1b3fa3655] ...
	I1007 11:06:42.513405  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ad8caf8cd8e5c1958f276a821efa4d740b902314703bfba54aacad1b3fa3655"
	I1007 11:06:42.559185  962848 logs.go:123] Gathering logs for container status ...
	I1007 11:06:42.559254  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 11:06:42.627600  962848 logs.go:123] Gathering logs for kube-scheduler [488746755f4ec365475c0e80db0b5877d018124229da8381a6790a558a873537] ...
	I1007 11:06:42.627672  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 488746755f4ec365475c0e80db0b5877d018124229da8381a6790a558a873537"
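The ssh_runner lines above show the collection pattern minikube falls back to while the apiserver stays unhealthy: enumerate the control-plane containers with "crictl ps -a --quiet --name=<component>", then tail each container's log with "crictl logs --tail 400 <id>" (plus journalctl for CRI-O and the kubelet). A minimal Go sketch of that same loop, run locally rather than over SSH, is shown below for reference; it is illustrative only, not minikube's logs.go code, and it assumes crictl and sudo are available on the machine.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name matches
// the given component, mirroring the "sudo crictl ps -a --quiet --name=..."
// calls captured in the log above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
	for _, component := range components {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println(component+":", err)
			continue
		}
		for _, id := range ids {
			// Same 400-line tail the log-gathering steps above use.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s\n", component, id, logs)
		}
	}
}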
	I1007 11:06:45.201694  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:45.215163  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 11:06:45.215203  962848 api_server.go:103] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 11:06:45.215344  962848 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 11:06:45.215444  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 11:06:45.294278  962848 cri.go:89] found id: "6a47676e23524d820ec5aa75ed799c7bb40e8abe63fbbb9cb432b6013308d636"
	I1007 11:06:45.294304  962848 cri.go:89] found id: "c570855fc50bfffd9530a1bfed1712a3426797f75b44cfe6106cf4c2a5b9bb44"
	I1007 11:06:45.294311  962848 cri.go:89] found id: ""
	I1007 11:06:45.294318  962848 logs.go:282] 2 containers: [6a47676e23524d820ec5aa75ed799c7bb40e8abe63fbbb9cb432b6013308d636 c570855fc50bfffd9530a1bfed1712a3426797f75b44cfe6106cf4c2a5b9bb44]
	I1007 11:06:45.294413  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:45.300010  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:45.308357  962848 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 11:06:45.308493  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 11:06:45.390186  962848 cri.go:89] found id: "14dc6d2f19e229c9a793ddeb4595f99aacf16e29619bb398ccc5edb1b72055d3"
	I1007 11:06:45.390253  962848 cri.go:89] found id: "46241c4d338c6d83da36f4ca804ceebe00a71970936c031c1e38872831edf76f"
	I1007 11:06:45.390281  962848 cri.go:89] found id: ""
	I1007 11:06:45.390293  962848 logs.go:282] 2 containers: [14dc6d2f19e229c9a793ddeb4595f99aacf16e29619bb398ccc5edb1b72055d3 46241c4d338c6d83da36f4ca804ceebe00a71970936c031c1e38872831edf76f]
	I1007 11:06:45.390570  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:45.398031  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:45.402744  962848 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 11:06:45.402856  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 11:06:45.469909  962848 cri.go:89] found id: ""
	I1007 11:06:45.469936  962848 logs.go:282] 0 containers: []
	W1007 11:06:45.469945  962848 logs.go:284] No container was found matching "coredns"
	I1007 11:06:45.469952  962848 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 11:06:45.470069  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 11:06:45.525721  962848 cri.go:89] found id: "488746755f4ec365475c0e80db0b5877d018124229da8381a6790a558a873537"
	I1007 11:06:45.525744  962848 cri.go:89] found id: "7ad8caf8cd8e5c1958f276a821efa4d740b902314703bfba54aacad1b3fa3655"
	I1007 11:06:45.525749  962848 cri.go:89] found id: ""
	I1007 11:06:45.525755  962848 logs.go:282] 2 containers: [488746755f4ec365475c0e80db0b5877d018124229da8381a6790a558a873537 7ad8caf8cd8e5c1958f276a821efa4d740b902314703bfba54aacad1b3fa3655]
	I1007 11:06:45.525860  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:45.530929  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:45.534888  962848 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 11:06:45.534987  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 11:06:45.591275  962848 cri.go:89] found id: "547596001ccc76d3f259e43f366a34e5fd130bd58649556a53c6f31a85835164"
	I1007 11:06:45.591298  962848 cri.go:89] found id: ""
	I1007 11:06:45.591306  962848 logs.go:282] 1 containers: [547596001ccc76d3f259e43f366a34e5fd130bd58649556a53c6f31a85835164]
	I1007 11:06:45.591393  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:45.595162  962848 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 11:06:45.595260  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 11:06:45.668871  962848 cri.go:89] found id: "9b01f8fd73c98a6f407a754d4c5d98e9cb16b3a8a5df55b6a5684f0a28627950"
	I1007 11:06:45.668893  962848 cri.go:89] found id: "3ea0ede952316d89fa1317d2d792cac676a3835028641953359eed60bd42d4b9"
	I1007 11:06:45.668898  962848 cri.go:89] found id: ""
	I1007 11:06:45.668905  962848 logs.go:282] 2 containers: [9b01f8fd73c98a6f407a754d4c5d98e9cb16b3a8a5df55b6a5684f0a28627950 3ea0ede952316d89fa1317d2d792cac676a3835028641953359eed60bd42d4b9]
	I1007 11:06:45.668999  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:45.673760  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:45.677331  962848 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 11:06:45.677425  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 11:06:45.738820  962848 cri.go:89] found id: "b70db3fb1e140b4493b769b4f4592ad264693fc38a3582992bc52195c1e78832"
	I1007 11:06:45.738842  962848 cri.go:89] found id: ""
	I1007 11:06:45.738849  962848 logs.go:282] 1 containers: [b70db3fb1e140b4493b769b4f4592ad264693fc38a3582992bc52195c1e78832]
	I1007 11:06:45.738910  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:45.742528  962848 logs.go:123] Gathering logs for kubelet ...
	I1007 11:06:45.742553  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 11:06:45.833024  962848 logs.go:123] Gathering logs for etcd [14dc6d2f19e229c9a793ddeb4595f99aacf16e29619bb398ccc5edb1b72055d3] ...
	I1007 11:06:45.833101  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14dc6d2f19e229c9a793ddeb4595f99aacf16e29619bb398ccc5edb1b72055d3"
	I1007 11:06:45.886974  962848 logs.go:123] Gathering logs for etcd [46241c4d338c6d83da36f4ca804ceebe00a71970936c031c1e38872831edf76f] ...
	I1007 11:06:45.887008  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46241c4d338c6d83da36f4ca804ceebe00a71970936c031c1e38872831edf76f"
	I1007 11:06:45.956141  962848 logs.go:123] Gathering logs for kube-scheduler [488746755f4ec365475c0e80db0b5877d018124229da8381a6790a558a873537] ...
	I1007 11:06:45.956176  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 488746755f4ec365475c0e80db0b5877d018124229da8381a6790a558a873537"
	I1007 11:06:45.996233  962848 logs.go:123] Gathering logs for kindnet [b70db3fb1e140b4493b769b4f4592ad264693fc38a3582992bc52195c1e78832] ...
	I1007 11:06:45.996299  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b70db3fb1e140b4493b769b4f4592ad264693fc38a3582992bc52195c1e78832"
	I1007 11:06:46.039467  962848 logs.go:123] Gathering logs for CRI-O ...
	I1007 11:06:46.039499  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 11:06:46.116473  962848 logs.go:123] Gathering logs for describe nodes ...
	I1007 11:06:46.116512  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 11:06:46.369314  962848 logs.go:123] Gathering logs for kube-apiserver [c570855fc50bfffd9530a1bfed1712a3426797f75b44cfe6106cf4c2a5b9bb44] ...
	I1007 11:06:46.369350  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c570855fc50bfffd9530a1bfed1712a3426797f75b44cfe6106cf4c2a5b9bb44"
	I1007 11:06:46.412108  962848 logs.go:123] Gathering logs for kube-controller-manager [9b01f8fd73c98a6f407a754d4c5d98e9cb16b3a8a5df55b6a5684f0a28627950] ...
	I1007 11:06:46.412137  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b01f8fd73c98a6f407a754d4c5d98e9cb16b3a8a5df55b6a5684f0a28627950"
	I1007 11:06:46.471623  962848 logs.go:123] Gathering logs for container status ...
	I1007 11:06:46.471662  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 11:06:46.524839  962848 logs.go:123] Gathering logs for dmesg ...
	I1007 11:06:46.524878  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 11:06:46.541779  962848 logs.go:123] Gathering logs for kube-apiserver [6a47676e23524d820ec5aa75ed799c7bb40e8abe63fbbb9cb432b6013308d636] ...
	I1007 11:06:46.541811  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a47676e23524d820ec5aa75ed799c7bb40e8abe63fbbb9cb432b6013308d636"
	I1007 11:06:46.595697  962848 logs.go:123] Gathering logs for kube-scheduler [7ad8caf8cd8e5c1958f276a821efa4d740b902314703bfba54aacad1b3fa3655] ...
	I1007 11:06:46.595732  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ad8caf8cd8e5c1958f276a821efa4d740b902314703bfba54aacad1b3fa3655"
	I1007 11:06:46.639928  962848 logs.go:123] Gathering logs for kube-proxy [547596001ccc76d3f259e43f366a34e5fd130bd58649556a53c6f31a85835164] ...
	I1007 11:06:46.639958  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 547596001ccc76d3f259e43f366a34e5fd130bd58649556a53c6f31a85835164"
	I1007 11:06:46.695109  962848 logs.go:123] Gathering logs for kube-controller-manager [3ea0ede952316d89fa1317d2d792cac676a3835028641953359eed60bd42d4b9] ...
	I1007 11:06:46.695139  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ea0ede952316d89fa1317d2d792cac676a3835028641953359eed60bd42d4b9"
	I1007 11:06:49.232855  962848 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1007 11:06:49.244748  962848 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1007 11:06:49.244843  962848 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1007 11:06:49.244850  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:49.244858  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:49.244863  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:49.264745  962848 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I1007 11:06:49.265176  962848 api_server.go:141] control plane version: v1.31.1
	I1007 11:06:49.265203  962848 api_server.go:131] duration metric: took 46.828637845s to wait for apiserver health ...
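(Editor's note: the 46.8s "apiserver health" metric above comes from repeatedly probing the apiserver's /healthz endpoint until it answers 200 "ok". Below is a minimal stand-alone sketch of that polling pattern in Go; the URL matches the log, but the interval, timeout, and skipping of TLS verification are illustrative assumptions, not minikube's actual implementation.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 with body "ok",
    // or the overall timeout expires. TLS verification is skipped here
    // because this sketch does not load the cluster CA.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.58.2:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }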
	I1007 11:06:49.265213  962848 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 11:06:49.265247  962848 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 11:06:49.265318  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 11:06:49.315913  962848 cri.go:89] found id: "6a47676e23524d820ec5aa75ed799c7bb40e8abe63fbbb9cb432b6013308d636"
	I1007 11:06:49.315990  962848 cri.go:89] found id: "c570855fc50bfffd9530a1bfed1712a3426797f75b44cfe6106cf4c2a5b9bb44"
	I1007 11:06:49.316010  962848 cri.go:89] found id: ""
	I1007 11:06:49.316033  962848 logs.go:282] 2 containers: [6a47676e23524d820ec5aa75ed799c7bb40e8abe63fbbb9cb432b6013308d636 c570855fc50bfffd9530a1bfed1712a3426797f75b44cfe6106cf4c2a5b9bb44]
	I1007 11:06:49.316119  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:49.320166  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:49.323808  962848 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 11:06:49.323934  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 11:06:49.383102  962848 cri.go:89] found id: "14dc6d2f19e229c9a793ddeb4595f99aacf16e29619bb398ccc5edb1b72055d3"
	I1007 11:06:49.383121  962848 cri.go:89] found id: "46241c4d338c6d83da36f4ca804ceebe00a71970936c031c1e38872831edf76f"
	I1007 11:06:49.383126  962848 cri.go:89] found id: ""
	I1007 11:06:49.383133  962848 logs.go:282] 2 containers: [14dc6d2f19e229c9a793ddeb4595f99aacf16e29619bb398ccc5edb1b72055d3 46241c4d338c6d83da36f4ca804ceebe00a71970936c031c1e38872831edf76f]
	I1007 11:06:49.383192  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:49.387734  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:49.391569  962848 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 11:06:49.391634  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 11:06:49.443374  962848 cri.go:89] found id: ""
	I1007 11:06:49.443397  962848 logs.go:282] 0 containers: []
	W1007 11:06:49.443406  962848 logs.go:284] No container was found matching "coredns"
	I1007 11:06:49.443412  962848 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 11:06:49.443472  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 11:06:49.494126  962848 cri.go:89] found id: "488746755f4ec365475c0e80db0b5877d018124229da8381a6790a558a873537"
	I1007 11:06:49.494149  962848 cri.go:89] found id: "7ad8caf8cd8e5c1958f276a821efa4d740b902314703bfba54aacad1b3fa3655"
	I1007 11:06:49.494155  962848 cri.go:89] found id: ""
	I1007 11:06:49.494161  962848 logs.go:282] 2 containers: [488746755f4ec365475c0e80db0b5877d018124229da8381a6790a558a873537 7ad8caf8cd8e5c1958f276a821efa4d740b902314703bfba54aacad1b3fa3655]
	I1007 11:06:49.494217  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:49.498540  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:49.507292  962848 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 11:06:49.507386  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 11:06:49.561684  962848 cri.go:89] found id: "547596001ccc76d3f259e43f366a34e5fd130bd58649556a53c6f31a85835164"
	I1007 11:06:49.561715  962848 cri.go:89] found id: ""
	I1007 11:06:49.561723  962848 logs.go:282] 1 containers: [547596001ccc76d3f259e43f366a34e5fd130bd58649556a53c6f31a85835164]
	I1007 11:06:49.561809  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:49.573861  962848 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 11:06:49.573973  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 11:06:49.637424  962848 cri.go:89] found id: "9b01f8fd73c98a6f407a754d4c5d98e9cb16b3a8a5df55b6a5684f0a28627950"
	I1007 11:06:49.637445  962848 cri.go:89] found id: "3ea0ede952316d89fa1317d2d792cac676a3835028641953359eed60bd42d4b9"
	I1007 11:06:49.637450  962848 cri.go:89] found id: ""
	I1007 11:06:49.637457  962848 logs.go:282] 2 containers: [9b01f8fd73c98a6f407a754d4c5d98e9cb16b3a8a5df55b6a5684f0a28627950 3ea0ede952316d89fa1317d2d792cac676a3835028641953359eed60bd42d4b9]
	I1007 11:06:49.637549  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:49.643559  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:49.648036  962848 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 11:06:49.648129  962848 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 11:06:49.692453  962848 cri.go:89] found id: "b70db3fb1e140b4493b769b4f4592ad264693fc38a3582992bc52195c1e78832"
	I1007 11:06:49.692475  962848 cri.go:89] found id: ""
	I1007 11:06:49.692483  962848 logs.go:282] 1 containers: [b70db3fb1e140b4493b769b4f4592ad264693fc38a3582992bc52195c1e78832]
	I1007 11:06:49.692568  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:49.696072  962848 logs.go:123] Gathering logs for kube-scheduler [488746755f4ec365475c0e80db0b5877d018124229da8381a6790a558a873537] ...
	I1007 11:06:49.696098  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 488746755f4ec365475c0e80db0b5877d018124229da8381a6790a558a873537"
	I1007 11:06:49.739144  962848 logs.go:123] Gathering logs for kube-controller-manager [3ea0ede952316d89fa1317d2d792cac676a3835028641953359eed60bd42d4b9] ...
	I1007 11:06:49.739174  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ea0ede952316d89fa1317d2d792cac676a3835028641953359eed60bd42d4b9"
	I1007 11:06:49.782166  962848 logs.go:123] Gathering logs for CRI-O ...
	I1007 11:06:49.782202  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 11:06:49.850273  962848 logs.go:123] Gathering logs for container status ...
	I1007 11:06:49.850308  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 11:06:49.915309  962848 logs.go:123] Gathering logs for describe nodes ...
	I1007 11:06:49.915388  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 11:06:50.188976  962848 logs.go:123] Gathering logs for kube-apiserver [6a47676e23524d820ec5aa75ed799c7bb40e8abe63fbbb9cb432b6013308d636] ...
	I1007 11:06:50.189026  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a47676e23524d820ec5aa75ed799c7bb40e8abe63fbbb9cb432b6013308d636"
	I1007 11:06:50.249010  962848 logs.go:123] Gathering logs for etcd [14dc6d2f19e229c9a793ddeb4595f99aacf16e29619bb398ccc5edb1b72055d3] ...
	I1007 11:06:50.249074  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14dc6d2f19e229c9a793ddeb4595f99aacf16e29619bb398ccc5edb1b72055d3"
	I1007 11:06:50.305914  962848 logs.go:123] Gathering logs for kindnet [b70db3fb1e140b4493b769b4f4592ad264693fc38a3582992bc52195c1e78832] ...
	I1007 11:06:50.305953  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b70db3fb1e140b4493b769b4f4592ad264693fc38a3582992bc52195c1e78832"
	I1007 11:06:50.347829  962848 logs.go:123] Gathering logs for kubelet ...
	I1007 11:06:50.347858  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 11:06:50.432066  962848 logs.go:123] Gathering logs for kube-proxy [547596001ccc76d3f259e43f366a34e5fd130bd58649556a53c6f31a85835164] ...
	I1007 11:06:50.432105  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 547596001ccc76d3f259e43f366a34e5fd130bd58649556a53c6f31a85835164"
	I1007 11:06:50.472715  962848 logs.go:123] Gathering logs for kube-scheduler [7ad8caf8cd8e5c1958f276a821efa4d740b902314703bfba54aacad1b3fa3655] ...
	I1007 11:06:50.472751  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ad8caf8cd8e5c1958f276a821efa4d740b902314703bfba54aacad1b3fa3655"
	I1007 11:06:50.509964  962848 logs.go:123] Gathering logs for kube-controller-manager [9b01f8fd73c98a6f407a754d4c5d98e9cb16b3a8a5df55b6a5684f0a28627950] ...
	I1007 11:06:50.509991  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b01f8fd73c98a6f407a754d4c5d98e9cb16b3a8a5df55b6a5684f0a28627950"
	I1007 11:06:50.573399  962848 logs.go:123] Gathering logs for dmesg ...
	I1007 11:06:50.573436  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 11:06:50.591242  962848 logs.go:123] Gathering logs for kube-apiserver [c570855fc50bfffd9530a1bfed1712a3426797f75b44cfe6106cf4c2a5b9bb44] ...
	I1007 11:06:50.591337  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c570855fc50bfffd9530a1bfed1712a3426797f75b44cfe6106cf4c2a5b9bb44"
	I1007 11:06:50.632925  962848 logs.go:123] Gathering logs for etcd [46241c4d338c6d83da36f4ca804ceebe00a71970936c031c1e38872831edf76f] ...
	I1007 11:06:50.633005  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46241c4d338c6d83da36f4ca804ceebe00a71970936c031c1e38872831edf76f"
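(Editor's note: each "Gathering logs for <component>" pair above first resolves container IDs with `crictl ps -a --quiet --name=<component>` and then tails each container's log. A rough local sketch of that loop follows; the component list and tail length mirror the log, while running the commands directly rather than over SSH is a simplification.)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers (running or exited)
    // whose name matches component, like `sudo crictl ps -a --quiet --name=<component>`.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    // tailLogs prints the last n log lines of every matching container.
    func tailLogs(component string, n int) error {
        ids, err := containerIDs(component)
        if err != nil {
            return err
        }
        for _, id := range ids {
            out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
            if err != nil {
                return err
            }
            fmt.Printf("== %s [%s] ==\n%s\n", component, id, out)
        }
        return nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
            if err := tailLogs(c, 400); err != nil {
                fmt.Println("error:", err)
            }
        }
    }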
	I1007 11:06:53.195158  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1007 11:06:53.195184  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:53.195195  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:53.195200  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:53.202298  962848 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 11:06:53.209101  962848 system_pods.go:59] 19 kube-system pods found
	I1007 11:06:53.209147  962848 system_pods.go:61] "coredns-7c65d6cfc9-b5fbm" [8ff7a7b0-7994-4515-adb3-21d9c278c139] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 11:06:53.209158  962848 system_pods.go:61] "coredns-7c65d6cfc9-z86x9" [033b2bd9-3064-4c1b-b99e-a15c1837996b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 11:06:53.209164  962848 system_pods.go:61] "etcd-ha-685971" [277a5359-5a84-4fe9-a32a-1e832735ef6f] Running
	I1007 11:06:53.209170  962848 system_pods.go:61] "etcd-ha-685971-m02" [cc26ada5-4442-4448-b4e7-966f3abc823d] Running
	I1007 11:06:53.209175  962848 system_pods.go:61] "kindnet-hgwsj" [b587480e-87cd-45c8-8d70-698f690b4a8e] Running
	I1007 11:06:53.209179  962848 system_pods.go:61] "kindnet-l95rf" [54b2a899-d13c-4865-93bb-37ec6a91e46d] Running
	I1007 11:06:53.209183  962848 system_pods.go:61] "kindnet-wn9mj" [078379e4-19c8-4c0a-846d-51890ce0ad4c] Running
	I1007 11:06:53.209189  962848 system_pods.go:61] "kube-apiserver-ha-685971" [8402918c-cca3-453a-bb6d-7bc6e0eaca5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1007 11:06:53.209200  962848 system_pods.go:61] "kube-apiserver-ha-685971-m02" [126ceb01-dc74-4331-adcf-7aa7e4c60288] Running
	I1007 11:06:53.209207  962848 system_pods.go:61] "kube-controller-manager-ha-685971" [d4398ae9-200e-40ce-8d1c-d32e64042a25] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1007 11:06:53.209215  962848 system_pods.go:61] "kube-controller-manager-ha-685971-m02" [a1ffa8b1-3097-476e-880d-72b40ad7f4a5] Running
	I1007 11:06:53.209219  962848 system_pods.go:61] "kube-proxy-4frrm" [8277da80-4818-4f83-80b7-2592d6706fe3] Running
	I1007 11:06:53.209231  962848 system_pods.go:61] "kube-proxy-787s7" [e1e0fe0b-01c5-4552-a8bc-0c39087e2117] Running
	I1007 11:06:53.209239  962848 system_pods.go:61] "kube-proxy-bdxpj" [ec6c7cb5-7053-4ea7-a97f-9f9427a8fbcc] Running
	I1007 11:06:53.209243  962848 system_pods.go:61] "kube-scheduler-ha-685971" [f7695c11-e967-4945-8d36-57a3484d6bb2] Running
	I1007 11:06:53.209249  962848 system_pods.go:61] "kube-scheduler-ha-685971-m02" [7b98e145-a920-4ef9-8267-7677a524641a] Running
	I1007 11:06:53.209256  962848 system_pods.go:61] "kube-vip-ha-685971" [e87a7460-40b8-4a02-b7fe-89de9f9d43f1] Running
	I1007 11:06:53.209259  962848 system_pods.go:61] "kube-vip-ha-685971-m02" [2814d8cd-ad45-4321-a95a-879c62a9026d] Running
	I1007 11:06:53.209264  962848 system_pods.go:61] "storage-provisioner" [a30fd0ad-d5d6-49b0-8ea4-11b4033abc79] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1007 11:06:53.209274  962848 system_pods.go:74] duration metric: took 3.944056268s to wait for pod list to return data ...
	I1007 11:06:53.209289  962848 default_sa.go:34] waiting for default service account to be created ...
	I1007 11:06:53.209394  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1007 11:06:53.209405  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:53.209413  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:53.209417  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:53.212396  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:06:53.212689  962848 default_sa.go:45] found service account: "default"
	I1007 11:06:53.212703  962848 default_sa.go:55] duration metric: took 3.407757ms for default service account to be created ...
	I1007 11:06:53.212713  962848 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 11:06:53.212771  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1007 11:06:53.212776  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:53.212784  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:53.212787  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:53.217518  962848 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 11:06:53.225349  962848 system_pods.go:86] 19 kube-system pods found
	I1007 11:06:53.225391  962848 system_pods.go:89] "coredns-7c65d6cfc9-b5fbm" [8ff7a7b0-7994-4515-adb3-21d9c278c139] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 11:06:53.225402  962848 system_pods.go:89] "coredns-7c65d6cfc9-z86x9" [033b2bd9-3064-4c1b-b99e-a15c1837996b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 11:06:53.225409  962848 system_pods.go:89] "etcd-ha-685971" [277a5359-5a84-4fe9-a32a-1e832735ef6f] Running
	I1007 11:06:53.225413  962848 system_pods.go:89] "etcd-ha-685971-m02" [cc26ada5-4442-4448-b4e7-966f3abc823d] Running
	I1007 11:06:53.225418  962848 system_pods.go:89] "kindnet-hgwsj" [b587480e-87cd-45c8-8d70-698f690b4a8e] Running
	I1007 11:06:53.225428  962848 system_pods.go:89] "kindnet-l95rf" [54b2a899-d13c-4865-93bb-37ec6a91e46d] Running
	I1007 11:06:53.225438  962848 system_pods.go:89] "kindnet-wn9mj" [078379e4-19c8-4c0a-846d-51890ce0ad4c] Running
	I1007 11:06:53.225444  962848 system_pods.go:89] "kube-apiserver-ha-685971" [8402918c-cca3-453a-bb6d-7bc6e0eaca5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1007 11:06:53.225453  962848 system_pods.go:89] "kube-apiserver-ha-685971-m02" [126ceb01-dc74-4331-adcf-7aa7e4c60288] Running
	I1007 11:06:53.225460  962848 system_pods.go:89] "kube-controller-manager-ha-685971" [d4398ae9-200e-40ce-8d1c-d32e64042a25] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1007 11:06:53.225469  962848 system_pods.go:89] "kube-controller-manager-ha-685971-m02" [a1ffa8b1-3097-476e-880d-72b40ad7f4a5] Running
	I1007 11:06:53.225478  962848 system_pods.go:89] "kube-proxy-4frrm" [8277da80-4818-4f83-80b7-2592d6706fe3] Running
	I1007 11:06:53.225482  962848 system_pods.go:89] "kube-proxy-787s7" [e1e0fe0b-01c5-4552-a8bc-0c39087e2117] Running
	I1007 11:06:53.225487  962848 system_pods.go:89] "kube-proxy-bdxpj" [ec6c7cb5-7053-4ea7-a97f-9f9427a8fbcc] Running
	I1007 11:06:53.225496  962848 system_pods.go:89] "kube-scheduler-ha-685971" [f7695c11-e967-4945-8d36-57a3484d6bb2] Running
	I1007 11:06:53.225500  962848 system_pods.go:89] "kube-scheduler-ha-685971-m02" [7b98e145-a920-4ef9-8267-7677a524641a] Running
	I1007 11:06:53.225504  962848 system_pods.go:89] "kube-vip-ha-685971" [e87a7460-40b8-4a02-b7fe-89de9f9d43f1] Running
	I1007 11:06:53.225508  962848 system_pods.go:89] "kube-vip-ha-685971-m02" [2814d8cd-ad45-4321-a95a-879c62a9026d] Running
	I1007 11:06:53.225516  962848 system_pods.go:89] "storage-provisioner" [a30fd0ad-d5d6-49b0-8ea4-11b4033abc79] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1007 11:06:53.225533  962848 system_pods.go:126] duration metric: took 12.814704ms to wait for k8s-apps to be running ...
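(Editor's note: the "k8s-apps to be running" step above lists every kube-system pod and inspects its phase and Ready condition, which is why the coredns, kube-apiserver, kube-controller-manager and storage-provisioner pods show "ContainersNotReady" annotations. A hedged client-go sketch of the same check follows; the kubeconfig path is an assumption and error handling is minimal.)

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; minikube keeps its own under the test home directory.
        config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            fmt.Printf("%-45s phase=%-9s ready=%v\n", p.Name, p.Status.Phase, ready)
        }
    }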
	I1007 11:06:53.225548  962848 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 11:06:53.225604  962848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 11:06:53.239873  962848 system_svc.go:56] duration metric: took 14.314591ms WaitForService to wait for kubelet
	I1007 11:06:53.239903  962848 kubeadm.go:582] duration metric: took 1m12.581135906s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 11:06:53.239921  962848 node_conditions.go:102] verifying NodePressure condition ...
	I1007 11:06:53.239993  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1007 11:06:53.240003  962848 round_trippers.go:469] Request Headers:
	I1007 11:06:53.240011  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:06:53.240016  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:06:53.243348  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:06:53.245564  962848 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 11:06:53.245637  962848 node_conditions.go:123] node cpu capacity is 2
	I1007 11:06:53.245664  962848 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 11:06:53.245686  962848 node_conditions.go:123] node cpu capacity is 2
	I1007 11:06:53.245737  962848 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 11:06:53.245763  962848 node_conditions.go:123] node cpu capacity is 2
	I1007 11:06:53.245800  962848 node_conditions.go:105] duration metric: took 5.85858ms to run NodePressure ...
	I1007 11:06:53.245830  962848 start.go:241] waiting for startup goroutines ...
	I1007 11:06:53.245881  962848 start.go:255] writing updated cluster config ...
	I1007 11:06:53.248959  962848 out.go:201] 
	I1007 11:06:53.251589  962848 config.go:182] Loaded profile config "ha-685971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:06:53.251749  962848 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/config.json ...
	I1007 11:06:53.254897  962848 out.go:177] * Starting "ha-685971-m04" worker node in "ha-685971" cluster
	I1007 11:06:53.257941  962848 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 11:06:53.260476  962848 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 11:06:53.263148  962848 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:06:53.263177  962848 cache.go:56] Caching tarball of preloaded images
	I1007 11:06:53.263236  962848 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 11:06:53.263301  962848 preload.go:172] Found /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1007 11:06:53.263312  962848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 11:06:53.263468  962848 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/config.json ...
	I1007 11:06:53.281274  962848 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 11:06:53.281299  962848 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1007 11:06:53.281320  962848 cache.go:194] Successfully downloaded all kic artifacts
	I1007 11:06:53.281344  962848 start.go:360] acquireMachinesLock for ha-685971-m04: {Name:mke145bbd8e1eeb9685bb2400f23b3ddadf45e36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:06:53.281412  962848 start.go:364] duration metric: took 48.607µs to acquireMachinesLock for "ha-685971-m04"
	I1007 11:06:53.281435  962848 start.go:96] Skipping create...Using existing machine configuration
	I1007 11:06:53.281442  962848 fix.go:54] fixHost starting: m04
	I1007 11:06:53.281692  962848 cli_runner.go:164] Run: docker container inspect ha-685971-m04 --format={{.State.Status}}
	I1007 11:06:53.298141  962848 fix.go:112] recreateIfNeeded on ha-685971-m04: state=Stopped err=<nil>
	W1007 11:06:53.298171  962848 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 11:06:53.301257  962848 out.go:177] * Restarting existing docker container for "ha-685971-m04" ...
	I1007 11:06:53.303805  962848 cli_runner.go:164] Run: docker start ha-685971-m04
	I1007 11:06:53.609784  962848 cli_runner.go:164] Run: docker container inspect ha-685971-m04 --format={{.State.Status}}
	I1007 11:06:53.634071  962848 kic.go:430] container "ha-685971-m04" state is running.
	I1007 11:06:53.634499  962848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-685971-m04
	I1007 11:06:53.659165  962848 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/config.json ...
	I1007 11:06:53.659413  962848 machine.go:93] provisionDockerMachine start ...
	I1007 11:06:53.659473  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m04
	I1007 11:06:53.679351  962848 main.go:141] libmachine: Using SSH client type: native
	I1007 11:06:53.679588  962848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33952 <nil> <nil>}
	I1007 11:06:53.679597  962848 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 11:06:53.680128  962848 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56264->127.0.0.1:33952: read: connection reset by peer
	I1007 11:06:56.827718  962848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685971-m04
	
	I1007 11:06:56.827747  962848 ubuntu.go:169] provisioning hostname "ha-685971-m04"
	I1007 11:06:56.827825  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m04
	I1007 11:06:56.845253  962848 main.go:141] libmachine: Using SSH client type: native
	I1007 11:06:56.845687  962848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33952 <nil> <nil>}
	I1007 11:06:56.845710  962848 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-685971-m04 && echo "ha-685971-m04" | sudo tee /etc/hostname
	I1007 11:06:57.002634  962848 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-685971-m04
	
	I1007 11:06:57.002728  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m04
	I1007 11:06:57.027064  962848 main.go:141] libmachine: Using SSH client type: native
	I1007 11:06:57.027318  962848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33952 <nil> <nil>}
	I1007 11:06:57.027339  962848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-685971-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-685971-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-685971-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 11:06:57.172552  962848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 11:06:57.172578  962848 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19761-891319/.minikube CaCertPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19761-891319/.minikube}
	I1007 11:06:57.172594  962848 ubuntu.go:177] setting up certificates
	I1007 11:06:57.172604  962848 provision.go:84] configureAuth start
	I1007 11:06:57.172667  962848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-685971-m04
	I1007 11:06:57.197823  962848 provision.go:143] copyHostCerts
	I1007 11:06:57.197867  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19761-891319/.minikube/ca.pem
	I1007 11:06:57.197902  962848 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-891319/.minikube/ca.pem, removing ...
	I1007 11:06:57.197915  962848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-891319/.minikube/ca.pem
	I1007 11:06:57.197990  962848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19761-891319/.minikube/ca.pem (1078 bytes)
	I1007 11:06:57.198081  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19761-891319/.minikube/cert.pem
	I1007 11:06:57.198104  962848 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-891319/.minikube/cert.pem, removing ...
	I1007 11:06:57.198114  962848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-891319/.minikube/cert.pem
	I1007 11:06:57.198143  962848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19761-891319/.minikube/cert.pem (1123 bytes)
	I1007 11:06:57.198189  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19761-891319/.minikube/key.pem
	I1007 11:06:57.198204  962848 exec_runner.go:144] found /home/jenkins/minikube-integration/19761-891319/.minikube/key.pem, removing ...
	I1007 11:06:57.198209  962848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19761-891319/.minikube/key.pem
	I1007 11:06:57.198234  962848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19761-891319/.minikube/key.pem (1679 bytes)
	I1007 11:06:57.198279  962848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca-key.pem org=jenkins.ha-685971-m04 san=[127.0.0.1 192.168.58.5 ha-685971-m04 localhost minikube]
	I1007 11:06:57.643839  962848 provision.go:177] copyRemoteCerts
	I1007 11:06:57.643907  962848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 11:06:57.643950  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m04
	I1007 11:06:57.663540  962848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33952 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971-m04/id_rsa Username:docker}
	I1007 11:06:57.761733  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 11:06:57.761811  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 11:06:57.792080  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 11:06:57.792143  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 11:06:57.820736  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 11:06:57.820799  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 11:06:57.847082  962848 provision.go:87] duration metric: took 674.461823ms to configureAuth
	I1007 11:06:57.847113  962848 ubuntu.go:193] setting minikube options for container-runtime
	I1007 11:06:57.847341  962848 config.go:182] Loaded profile config "ha-685971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:06:57.847449  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m04
	I1007 11:06:57.864203  962848 main.go:141] libmachine: Using SSH client type: native
	I1007 11:06:57.864517  962848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33952 <nil> <nil>}
	I1007 11:06:57.864543  962848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 11:06:58.160218  962848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 11:06:58.160264  962848 machine.go:96] duration metric: took 4.500821231s to provisionDockerMachine
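(Editor's note: provisionDockerMachine drives the steps above, hostname, /etc/hosts, certificates and CRI-O options, over the SSH port that Docker published for the m04 container (127.0.0.1:33952, key path from sshutil.go:53). A minimal sketch of running one remote command that way with golang.org/x/crypto/ssh; the command is a placeholder and host-key checking is deliberately disabled, as is acceptable only for a local test node.)

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971-m04/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33952", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        out, err := session.CombinedOutput("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("remote hostname: %s", out)
    }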
	I1007 11:06:58.160278  962848 start.go:293] postStartSetup for "ha-685971-m04" (driver="docker")
	I1007 11:06:58.160290  962848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 11:06:58.160356  962848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 11:06:58.160399  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m04
	I1007 11:06:58.178454  962848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33952 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971-m04/id_rsa Username:docker}
	I1007 11:06:58.282368  962848 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 11:06:58.286060  962848 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 11:06:58.286099  962848 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 11:06:58.286110  962848 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 11:06:58.286118  962848 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 11:06:58.286128  962848 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-891319/.minikube/addons for local assets ...
	I1007 11:06:58.286193  962848 filesync.go:126] Scanning /home/jenkins/minikube-integration/19761-891319/.minikube/files for local assets ...
	I1007 11:06:58.286275  962848 filesync.go:149] local asset: /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem -> 8967262.pem in /etc/ssl/certs
	I1007 11:06:58.286286  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem -> /etc/ssl/certs/8967262.pem
	I1007 11:06:58.286389  962848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 11:06:58.295130  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem --> /etc/ssl/certs/8967262.pem (1708 bytes)
	I1007 11:06:58.322115  962848 start.go:296] duration metric: took 161.820082ms for postStartSetup
	I1007 11:06:58.322197  962848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 11:06:58.322239  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m04
	I1007 11:06:58.341377  962848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33952 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971-m04/id_rsa Username:docker}
	I1007 11:06:58.433459  962848 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 11:06:58.438192  962848 fix.go:56] duration metric: took 5.156741692s for fixHost
	I1007 11:06:58.438219  962848 start.go:83] releasing machines lock for "ha-685971-m04", held for 5.15679596s
	I1007 11:06:58.438290  962848 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-685971-m04
	I1007 11:06:58.460519  962848 out.go:177] * Found network options:
	I1007 11:06:58.462956  962848 out.go:177]   - NO_PROXY=192.168.58.2,192.168.58.3
	W1007 11:06:58.465374  962848 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 11:06:58.465404  962848 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 11:06:58.465430  962848 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 11:06:58.465441  962848 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 11:06:58.465512  962848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 11:06:58.465558  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m04
	I1007 11:06:58.465574  962848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 11:06:58.465631  962848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m04
	I1007 11:06:58.486466  962848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33952 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971-m04/id_rsa Username:docker}
	I1007 11:06:58.496153  962848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33952 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971-m04/id_rsa Username:docker}
	I1007 11:06:58.758769  962848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 11:06:58.763399  962848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:06:58.772599  962848 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1007 11:06:58.772707  962848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:06:58.782850  962848 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 11:06:58.782927  962848 start.go:495] detecting cgroup driver to use...
	I1007 11:06:58.782974  962848 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 11:06:58.783054  962848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 11:06:58.796239  962848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 11:06:58.807961  962848 docker.go:217] disabling cri-docker service (if available) ...
	I1007 11:06:58.808078  962848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 11:06:58.821646  962848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 11:06:58.833695  962848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 11:06:58.938565  962848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 11:06:59.040794  962848 docker.go:233] disabling docker service ...
	I1007 11:06:59.040864  962848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 11:06:59.054381  962848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 11:06:59.067339  962848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 11:06:59.167650  962848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 11:06:59.265483  962848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 11:06:59.277875  962848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 11:06:59.298165  962848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 11:06:59.298268  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:06:59.310694  962848 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 11:06:59.310785  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:06:59.321740  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:06:59.333502  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:06:59.343203  962848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 11:06:59.352059  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:06:59.363386  962848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:06:59.372860  962848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:06:59.383834  962848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 11:06:59.393233  962848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 11:06:59.403160  962848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:06:59.486137  962848 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 11:06:59.620227  962848 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 11:06:59.620378  962848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 11:06:59.624033  962848 start.go:563] Will wait 60s for crictl version
	I1007 11:06:59.624134  962848 ssh_runner.go:195] Run: which crictl
	I1007 11:06:59.627659  962848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 11:06:59.669973  962848 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1007 11:06:59.670135  962848 ssh_runner.go:195] Run: crio --version
	I1007 11:06:59.710450  962848 ssh_runner.go:195] Run: crio --version
	I1007 11:06:59.764599  962848 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1007 11:06:59.767047  962848 out.go:177]   - env NO_PROXY=192.168.58.2
	I1007 11:06:59.769849  962848 out.go:177]   - env NO_PROXY=192.168.58.2,192.168.58.3
	I1007 11:06:59.772294  962848 cli_runner.go:164] Run: docker network inspect ha-685971 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 11:06:59.792731  962848 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1007 11:06:59.796450  962848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:06:59.807677  962848 mustload.go:65] Loading cluster: ha-685971
	I1007 11:06:59.807912  962848 config.go:182] Loaded profile config "ha-685971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:06:59.808159  962848 cli_runner.go:164] Run: docker container inspect ha-685971 --format={{.State.Status}}
	I1007 11:06:59.828017  962848 host.go:66] Checking if "ha-685971" exists ...
	I1007 11:06:59.828393  962848 certs.go:68] Setting up /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971 for IP: 192.168.58.5
	I1007 11:06:59.828410  962848 certs.go:194] generating shared ca certs ...
	I1007 11:06:59.828426  962848 certs.go:226] acquiring lock for ca certs: {Name:mkd5251b1f18df70f58bf1f19694372431d4d649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:06:59.828593  962848 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19761-891319/.minikube/ca.key
	I1007 11:06:59.828649  962848 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.key
	I1007 11:06:59.828664  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 11:06:59.828678  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 11:06:59.828692  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 11:06:59.828704  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 11:06:59.828758  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/896726.pem (1338 bytes)
	W1007 11:06:59.828793  962848 certs.go:480] ignoring /home/jenkins/minikube-integration/19761-891319/.minikube/certs/896726_empty.pem, impossibly tiny 0 bytes
	I1007 11:06:59.828805  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 11:06:59.828830  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/ca.pem (1078 bytes)
	I1007 11:06:59.828858  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/cert.pem (1123 bytes)
	I1007 11:06:59.828885  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/key.pem (1679 bytes)
	I1007 11:06:59.828931  962848 certs.go:484] found cert: /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem (1708 bytes)
	I1007 11:06:59.828961  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem -> /usr/share/ca-certificates/8967262.pem
	I1007 11:06:59.828978  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:06:59.828989  962848 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19761-891319/.minikube/certs/896726.pem -> /usr/share/ca-certificates/896726.pem
	I1007 11:06:59.829011  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 11:06:59.856450  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 11:06:59.882759  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 11:06:59.908023  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 11:06:59.935161  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/ssl/certs/8967262.pem --> /usr/share/ca-certificates/8967262.pem (1708 bytes)
	I1007 11:06:59.961045  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 11:06:59.989010  962848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19761-891319/.minikube/certs/896726.pem --> /usr/share/ca-certificates/896726.pem (1338 bytes)
	I1007 11:07:00.109727  962848 ssh_runner.go:195] Run: openssl version
	I1007 11:07:00.136074  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 11:07:00.173865  962848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:07:00.186638  962848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:07:00.186751  962848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:07:00.211343  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 11:07:00.234311  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/896726.pem && ln -fs /usr/share/ca-certificates/896726.pem /etc/ssl/certs/896726.pem"
	I1007 11:07:00.273306  962848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/896726.pem
	I1007 11:07:00.287568  962848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 10:53 /usr/share/ca-certificates/896726.pem
	I1007 11:07:00.287669  962848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/896726.pem
	I1007 11:07:00.340467  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/896726.pem /etc/ssl/certs/51391683.0"
	I1007 11:07:00.353912  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8967262.pem && ln -fs /usr/share/ca-certificates/8967262.pem /etc/ssl/certs/8967262.pem"
	I1007 11:07:00.386160  962848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8967262.pem
	I1007 11:07:00.395039  962848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 10:53 /usr/share/ca-certificates/8967262.pem
	I1007 11:07:00.395166  962848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8967262.pem
	I1007 11:07:00.422804  962848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8967262.pem /etc/ssl/certs/3ec20f2e.0"
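(Editor's note: the `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed-directory convention: each CA dropped into /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0, such as b5213941.0 for minikubeCA.pem, so verifiers can locate it. A small sketch of that step; the helper name is made up, the paths are the ones from the log.)

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert installs certPath into certsDir under OpenSSL's
    // <subject-hash>.0 naming, mirroring `ln -fs` semantics.
    func linkCACert(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        os.Remove(link) // ignore error; force-replace like ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Println(err)
        }
    }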
	I1007 11:07:00.440535  962848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 11:07:00.445753  962848 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 11:07:00.445806  962848 kubeadm.go:934] updating node {m04 192.168.58.5 0 v1.31.1  false true} ...
	I1007 11:07:00.445907  962848 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-685971-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-685971 Namespace:default APIServerHAVIP:192.168.58.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 11:07:00.445985  962848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 11:07:00.460831  962848 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 11:07:00.460969  962848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1007 11:07:00.473648  962848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1007 11:07:00.500320  962848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 11:07:00.522749  962848 ssh_runner.go:195] Run: grep 192.168.58.254	control-plane.minikube.internal$ /etc/hosts
	I1007 11:07:00.527399  962848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:07:00.542769  962848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:07:00.633238  962848 ssh_runner.go:195] Run: sudo systemctl start kubelet
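(Editor's note: joining the worker amounts to writing the kubelet unit and drop-in shown at kubeadm.go:946, reloading systemd, and starting kubelet. The sketch below reproduces only the ExecStart override visible in the log; the real 10-kubeadm.conf and kubelet.service minikube scps over contain more than this, and the program must run as root.)

    package main

    import (
        "os"
        "os/exec"
    )

    const dropIn = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-685971-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.5
    `

    func main() {
        if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
            panic(err)
        }
        for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                panic(string(out))
            }
        }
    }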
	I1007 11:07:00.648670  962848 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.58.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1007 11:07:00.649174  962848 config.go:182] Loaded profile config "ha-685971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:07:00.651709  962848 out.go:177] * Verifying Kubernetes components...
	I1007 11:07:00.653995  962848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:07:00.741807  962848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:07:00.756178  962848 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19761-891319/kubeconfig
	I1007 11:07:00.756511  962848 kapi.go:59] client config for ha-685971: &rest.Config{Host:"https://192.168.58.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/client.crt", KeyFile:"/home/jenkins/minikube-integration/19761-891319/.minikube/profiles/ha-685971/client.key", CAFile:"/home/jenkins/minikube-integration/19761-891319/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e94a20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 11:07:00.756575  962848 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.58.254:8443 with https://192.168.58.2:8443
	I1007 11:07:00.756793  962848 node_ready.go:35] waiting up to 6m0s for node "ha-685971-m04" to be "Ready" ...
	I1007 11:07:00.756858  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:07:00.756863  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:00.756871  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:00.756876  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:00.759634  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:01.257761  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:07:01.257785  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:01.257795  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:01.257801  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:01.260702  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:01.757004  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:07:01.757029  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:01.757039  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:01.757044  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:01.759841  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:02.257782  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:07:02.257803  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:02.257814  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:02.257818  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:02.261256  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:07:02.757507  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:07:02.757531  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:02.757540  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:02.757545  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:02.761186  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:07:02.761760  962848 node_ready.go:53] node "ha-685971-m04" has status "Ready":"Unknown"
	I1007 11:07:03.257454  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:07:03.257486  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:03.257501  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:03.257511  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:03.261522  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:07:03.757333  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:07:03.757353  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:03.757361  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:03.757366  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:03.760226  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:04.257016  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:07:04.257035  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:04.257044  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:04.257049  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:04.260285  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:07:04.757039  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:07:04.757076  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:04.757088  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:04.757093  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:04.759795  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:05.257168  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:07:05.257192  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:05.257202  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:05.257208  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:05.259908  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:05.261556  962848 node_ready.go:53] node "ha-685971-m04" has status "Ready":"Unknown"
	I1007 11:07:05.757799  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:07:05.757826  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:05.757836  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:05.757841  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:05.760615  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:06.257704  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:07:06.257725  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:06.257734  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:06.257739  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:06.260489  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:06.757912  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:07:06.757938  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:06.757948  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:06.757953  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:06.760974  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:07:07.257310  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:07:07.257338  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:07.257349  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:07.257354  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:07.265214  962848 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 11:07:07.265836  962848 node_ready.go:49] node "ha-685971-m04" has status "Ready":"True"
	I1007 11:07:07.265859  962848 node_ready.go:38] duration metric: took 6.509053873s for node "ha-685971-m04" to be "Ready" ...
	I1007 11:07:07.265870  962848 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 11:07:07.265948  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1007 11:07:07.265961  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:07.265969  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:07.265975  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:07.287622  962848 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1007 11:07:07.295224  962848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-b5fbm" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:07.295350  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:07.295364  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:07.295373  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:07.295377  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:07.298295  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:07.299030  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:07.299052  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:07.299062  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:07.299067  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:07.301813  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:07.796489  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:07.796516  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:07.796526  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:07.796531  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:07.799279  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:07.800074  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:07.800095  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:07.800104  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:07.800108  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:07.802696  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:08.295419  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:08.295442  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:08.295452  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:08.295466  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:08.298396  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:08.299106  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:08.299130  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:08.299138  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:08.299144  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:08.301888  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:08.795915  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:08.795940  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:08.795951  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:08.795955  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:08.798909  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:08.799664  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:08.799684  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:08.799694  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:08.799699  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:08.802235  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:09.296069  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:09.296105  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:09.296134  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:09.296144  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:09.299642  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:07:09.300403  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:09.300425  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:09.300435  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:09.300440  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:09.303111  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:09.303805  962848 pod_ready.go:103] pod "coredns-7c65d6cfc9-b5fbm" in "kube-system" namespace has status "Ready":"False"
	I1007 11:07:09.796365  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:09.796389  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:09.796398  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:09.796403  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:09.799271  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:09.800051  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:09.800072  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:09.800082  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:09.800086  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:09.802442  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:10.295536  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:10.295564  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:10.295574  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:10.295581  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:10.298532  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:10.299288  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:10.299343  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:10.299360  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:10.299367  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:10.302062  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:10.795473  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:10.795496  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:10.795506  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:10.795511  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:10.801720  962848 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 11:07:10.802917  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:10.802936  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:10.802945  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:10.802949  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:10.805821  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:11.295459  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:11.295486  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:11.295500  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:11.295505  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:11.298519  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:11.299227  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:11.299250  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:11.299265  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:11.299268  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:11.301804  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:11.795531  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:11.795551  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:11.795560  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:11.795566  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:11.798578  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:11.799703  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:11.799760  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:11.799782  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:11.799804  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:11.802599  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:11.803617  962848 pod_ready.go:103] pod "coredns-7c65d6cfc9-b5fbm" in "kube-system" namespace has status "Ready":"False"
	I1007 11:07:12.295536  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:12.295569  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:12.295579  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:12.295583  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:12.298782  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:07:12.299517  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:12.299537  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:12.299546  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:12.299551  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:12.302308  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:12.795936  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:12.795959  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:12.795969  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:12.795973  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:12.798749  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:12.799704  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:12.799725  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:12.799735  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:12.799739  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:12.802342  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:13.295482  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:13.295506  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:13.295517  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:13.295520  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:13.298434  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:13.299299  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:13.299321  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:13.299330  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:13.299357  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:13.302007  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:13.795941  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:13.795965  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:13.795975  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:13.795980  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:13.798915  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:13.799736  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:13.799782  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:13.799799  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:13.799803  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:13.802472  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:14.295514  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:14.295538  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:14.295547  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:14.295552  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:14.298545  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:14.299220  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:14.299235  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:14.299245  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:14.299256  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:14.301899  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:14.302386  962848 pod_ready.go:103] pod "coredns-7c65d6cfc9-b5fbm" in "kube-system" namespace has status "Ready":"False"
	I1007 11:07:14.795852  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:14.795875  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:14.795885  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:14.795891  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:14.798787  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:14.799529  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:14.799546  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:14.799556  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:14.799562  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:14.802103  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:15.295922  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:15.295946  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:15.295955  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:15.295959  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:15.298989  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:07:15.299971  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:15.299998  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:15.300012  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:15.300020  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:15.302591  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:15.795410  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:15.795431  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:15.795440  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:15.795445  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:15.799331  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:07:15.800279  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:15.800295  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:15.800303  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:15.800308  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:15.802853  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:16.296044  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:16.296083  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:16.296097  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:16.296102  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:16.299081  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:16.299839  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:16.299865  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:16.299876  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:16.299880  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:16.302553  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:16.303040  962848 pod_ready.go:103] pod "coredns-7c65d6cfc9-b5fbm" in "kube-system" namespace has status "Ready":"False"
	I1007 11:07:16.795628  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:16.795655  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:16.795664  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:16.795671  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:16.798471  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:16.799325  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:16.799345  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:16.799354  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:16.799359  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:16.801796  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:17.296110  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:17.296136  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:17.296146  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:17.296151  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:17.299584  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:07:17.300750  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:17.300771  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:17.300782  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:17.300788  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:17.303436  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:17.795468  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:17.795493  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:17.795503  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:17.795509  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:17.798420  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:17.799162  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:17.799184  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:17.799193  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:17.799198  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:17.801695  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:18.295696  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:18.295728  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:18.295738  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:18.295745  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:18.299042  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:07:18.299987  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:18.300005  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:18.300019  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:18.300025  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:18.302938  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:18.303676  962848 pod_ready.go:103] pod "coredns-7c65d6cfc9-b5fbm" in "kube-system" namespace has status "Ready":"False"
	I1007 11:07:18.795900  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:18.795936  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:18.795953  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:18.795958  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:18.799108  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:07:18.800094  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:18.800112  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:18.800124  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:18.800131  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:18.802752  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:19.295599  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:19.295623  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:19.295634  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:19.295638  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:19.298560  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:19.299363  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:19.299382  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:19.299393  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:19.299399  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:19.302307  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:19.796052  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-b5fbm
	I1007 11:07:19.796074  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:19.796091  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:19.796096  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:19.798818  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:19.799587  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:19.799605  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:19.799613  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:19.799619  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:19.801905  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:19.802665  962848 pod_ready.go:98] node "ha-685971" hosting pod "coredns-7c65d6cfc9-b5fbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-685971" has status "Ready":"Unknown"
	I1007 11:07:19.802687  962848 pod_ready.go:82] duration metric: took 12.507429342s for pod "coredns-7c65d6cfc9-b5fbm" in "kube-system" namespace to be "Ready" ...
	E1007 11:07:19.802697  962848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-685971" hosting pod "coredns-7c65d6cfc9-b5fbm" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-685971" has status "Ready":"Unknown"
	I1007 11:07:19.802704  962848 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-z86x9" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:19.802770  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-z86x9
	I1007 11:07:19.802782  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:19.802790  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:19.802794  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:19.805202  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:19.805852  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:19.805872  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:19.805881  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:19.805888  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:19.808191  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:19.808942  962848 pod_ready.go:98] node "ha-685971" hosting pod "coredns-7c65d6cfc9-z86x9" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-685971" has status "Ready":"Unknown"
	I1007 11:07:19.808990  962848 pod_ready.go:82] duration metric: took 6.274191ms for pod "coredns-7c65d6cfc9-z86x9" in "kube-system" namespace to be "Ready" ...
	E1007 11:07:19.809007  962848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-685971" hosting pod "coredns-7c65d6cfc9-z86x9" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-685971" has status "Ready":"Unknown"
	I1007 11:07:19.809016  962848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685971" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:19.809076  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685971
	I1007 11:07:19.809085  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:19.809092  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:19.809098  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:19.811359  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:19.812279  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:19.812298  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:19.812307  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:19.812314  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:19.814636  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:19.815192  962848 pod_ready.go:98] node "ha-685971" hosting pod "etcd-ha-685971" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-685971" has status "Ready":"Unknown"
	I1007 11:07:19.815221  962848 pod_ready.go:82] duration metric: took 6.197884ms for pod "etcd-ha-685971" in "kube-system" namespace to be "Ready" ...
	E1007 11:07:19.815233  962848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-685971" hosting pod "etcd-ha-685971" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-685971" has status "Ready":"Unknown"
	I1007 11:07:19.815243  962848 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-685971-m02" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:19.815315  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-685971-m02
	I1007 11:07:19.815328  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:19.815337  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:19.815341  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:19.818518  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:07:19.819329  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:07:19.819348  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:19.819356  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:19.819361  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:19.821647  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:19.822210  962848 pod_ready.go:93] pod "etcd-ha-685971-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 11:07:19.822227  962848 pod_ready.go:82] duration metric: took 6.972017ms for pod "etcd-ha-685971-m02" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:19.822260  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685971" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:19.822334  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685971
	I1007 11:07:19.822348  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:19.822356  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:19.822363  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:19.824813  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:19.825470  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:19.825488  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:19.825497  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:19.825501  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:19.827750  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:19.828240  962848 pod_ready.go:98] node "ha-685971" hosting pod "kube-apiserver-ha-685971" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-685971" has status "Ready":"Unknown"
	I1007 11:07:19.828292  962848 pod_ready.go:82] duration metric: took 6.018292ms for pod "kube-apiserver-ha-685971" in "kube-system" namespace to be "Ready" ...
	E1007 11:07:19.828303  962848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-685971" hosting pod "kube-apiserver-ha-685971" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-685971" has status "Ready":"Unknown"
	I1007 11:07:19.828311  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-685971-m02" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:19.996660  962848 request.go:632] Waited for 168.284952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685971-m02
	I1007 11:07:19.996736  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685971-m02
	I1007 11:07:19.996747  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:19.996757  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:19.996771  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:19.999781  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:20.196098  962848 request.go:632] Waited for 195.287601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:07:20.196159  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:07:20.196165  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:20.196173  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:20.196184  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:20.198992  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:20.199511  962848 pod_ready.go:93] pod "kube-apiserver-ha-685971-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 11:07:20.199530  962848 pod_ready.go:82] duration metric: took 371.211347ms for pod "kube-apiserver-ha-685971-m02" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:20.199542  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685971" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:20.396505  962848 request.go:632] Waited for 196.872723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685971
	I1007 11:07:20.396570  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685971
	I1007 11:07:20.396580  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:20.396589  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:20.396597  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:20.399883  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:07:20.596131  962848 request.go:632] Waited for 195.159921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:20.596220  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:20.596230  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:20.596238  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:20.596265  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:20.599021  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:20.599656  962848 pod_ready.go:98] node "ha-685971" hosting pod "kube-controller-manager-ha-685971" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-685971" has status "Ready":"Unknown"
	I1007 11:07:20.599680  962848 pod_ready.go:82] duration metric: took 400.125671ms for pod "kube-controller-manager-ha-685971" in "kube-system" namespace to be "Ready" ...
	E1007 11:07:20.599691  962848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-685971" hosting pod "kube-controller-manager-ha-685971" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-685971" has status "Ready":"Unknown"
	I1007 11:07:20.599699  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-685971-m02" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:20.796603  962848 request.go:632] Waited for 196.834725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685971-m02
	I1007 11:07:20.796670  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-685971-m02
	I1007 11:07:20.796682  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:20.796691  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:20.796699  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:20.799601  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:20.996910  962848 request.go:632] Waited for 196.326093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:07:20.996987  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:07:20.996997  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:20.997006  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:20.997010  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:20.999729  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:21.000308  962848 pod_ready.go:93] pod "kube-controller-manager-ha-685971-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 11:07:21.000329  962848 pod_ready.go:82] duration metric: took 400.623563ms for pod "kube-controller-manager-ha-685971-m02" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:21.000342  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4frrm" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:21.196721  962848 request.go:632] Waited for 196.311209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4frrm
	I1007 11:07:21.196811  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4frrm
	I1007 11:07:21.196826  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:21.196836  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:21.196846  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:21.199777  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:21.396820  962848 request.go:632] Waited for 196.367528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:07:21.396883  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:07:21.396890  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:21.396906  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:21.396917  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:21.400029  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:07:21.400679  962848 pod_ready.go:93] pod "kube-proxy-4frrm" in "kube-system" namespace has status "Ready":"True"
	I1007 11:07:21.400704  962848 pod_ready.go:82] duration metric: took 400.352927ms for pod "kube-proxy-4frrm" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:21.400716  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-787s7" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:21.596288  962848 request.go:632] Waited for 195.454295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-787s7
	I1007 11:07:21.596365  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-787s7
	I1007 11:07:21.596376  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:21.596384  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:21.596390  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:21.599366  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:21.796426  962848 request.go:632] Waited for 196.346811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:21.796531  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:21.796569  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:21.796585  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:21.796596  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:21.799298  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:21.799880  962848 pod_ready.go:98] node "ha-685971" hosting pod "kube-proxy-787s7" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-685971" has status "Ready":"Unknown"
	I1007 11:07:21.799905  962848 pod_ready.go:82] duration metric: took 399.172956ms for pod "kube-proxy-787s7" in "kube-system" namespace to be "Ready" ...
	E1007 11:07:21.799915  962848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-685971" hosting pod "kube-proxy-787s7" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-685971" has status "Ready":"Unknown"
	I1007 11:07:21.799923  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bdxpj" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:21.996123  962848 request.go:632] Waited for 196.127136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bdxpj
	I1007 11:07:21.996239  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bdxpj
	I1007 11:07:21.996283  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:21.996305  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:21.996327  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:21.999177  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:22.197060  962848 request.go:632] Waited for 197.258394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:07:22.197171  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m04
	I1007 11:07:22.197182  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:22.197191  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:22.197208  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:22.200087  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:22.200635  962848 pod_ready.go:93] pod "kube-proxy-bdxpj" in "kube-system" namespace has status "Ready":"True"
	I1007 11:07:22.200656  962848 pod_ready.go:82] duration metric: took 400.722935ms for pod "kube-proxy-bdxpj" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:22.200669  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685971" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:22.396587  962848 request.go:632] Waited for 195.842551ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685971
	I1007 11:07:22.396654  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685971
	I1007 11:07:22.396663  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:22.396672  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:22.396681  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:22.400018  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:07:22.596051  962848 request.go:632] Waited for 195.297373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:22.596125  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971
	I1007 11:07:22.596135  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:22.596144  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:22.596149  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:22.599101  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:22.599697  962848 pod_ready.go:98] node "ha-685971" hosting pod "kube-scheduler-ha-685971" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-685971" has status "Ready":"Unknown"
	I1007 11:07:22.599720  962848 pod_ready.go:82] duration metric: took 399.0395ms for pod "kube-scheduler-ha-685971" in "kube-system" namespace to be "Ready" ...
	E1007 11:07:22.599733  962848 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-685971" hosting pod "kube-scheduler-ha-685971" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-685971" has status "Ready":"Unknown"
	I1007 11:07:22.599742  962848 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-685971-m02" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:22.796084  962848 request.go:632] Waited for 196.266155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685971-m02
	I1007 11:07:22.796158  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-685971-m02
	I1007 11:07:22.796173  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:22.796182  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:22.796197  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:22.798984  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:22.996946  962848 request.go:632] Waited for 197.341514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:07:22.997006  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/ha-685971-m02
	I1007 11:07:22.997012  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:22.997021  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:22.997029  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:22.999836  962848 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 11:07:23.000625  962848 pod_ready.go:93] pod "kube-scheduler-ha-685971-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 11:07:23.000648  962848 pod_ready.go:82] duration metric: took 400.89444ms for pod "kube-scheduler-ha-685971-m02" in "kube-system" namespace to be "Ready" ...
	I1007 11:07:23.000661  962848 pod_ready.go:39] duration metric: took 15.734780599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 11:07:23.000676  962848 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 11:07:23.000739  962848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 11:07:23.015135  962848 system_svc.go:56] duration metric: took 14.450227ms WaitForService to wait for kubelet
	I1007 11:07:23.015218  962848 kubeadm.go:582] duration metric: took 22.366498962s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 11:07:23.015252  962848 node_conditions.go:102] verifying NodePressure condition ...
	I1007 11:07:23.196717  962848 request.go:632] Waited for 181.352712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1007 11:07:23.196841  962848 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1007 11:07:23.196858  962848 round_trippers.go:469] Request Headers:
	I1007 11:07:23.196867  962848 round_trippers.go:473]     Accept: application/json, */*
	I1007 11:07:23.196871  962848 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1007 11:07:23.200475  962848 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 11:07:23.201906  962848 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 11:07:23.201934  962848 node_conditions.go:123] node cpu capacity is 2
	I1007 11:07:23.201946  962848 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 11:07:23.201951  962848 node_conditions.go:123] node cpu capacity is 2
	I1007 11:07:23.201955  962848 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 11:07:23.201959  962848 node_conditions.go:123] node cpu capacity is 2
	I1007 11:07:23.201965  962848 node_conditions.go:105] duration metric: took 186.689328ms to run NodePressure ...
	I1007 11:07:23.201979  962848 start.go:241] waiting for startup goroutines ...
	I1007 11:07:23.202004  962848 start.go:255] writing updated cluster config ...
	I1007 11:07:23.202332  962848 ssh_runner.go:195] Run: rm -f paused
	I1007 11:07:23.269362  962848 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 11:07:23.272322  962848 out.go:177] * Done! kubectl is now configured to use "ha-685971" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 07 11:06:45 ha-685971 crio[644]: time="2024-10-07 11:06:45.156324988Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f2fa5cf29829c7f4efc22ad4eee07a11b7d6e0ed00d78336601cf4a6ca1ebc51/merged/etc/group: no such file or directory"
	Oct 07 11:06:45 ha-685971 crio[644]: time="2024-10-07 11:06:45.234473369Z" level=info msg="Created container 357fb00a27dd915dfd1ce7054a63ba9997900dcdf0d887aa3052abae46f9c7f1: kube-system/kube-vip-ha-685971/kube-vip" id=452ff08d-d75c-4d78-bee3-716544b941ea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 07 11:06:45 ha-685971 crio[644]: time="2024-10-07 11:06:45.235405687Z" level=info msg="Starting container: 357fb00a27dd915dfd1ce7054a63ba9997900dcdf0d887aa3052abae46f9c7f1" id=6fd0347a-e638-46d5-bea1-975dbfe2f312 name=/runtime.v1.RuntimeService/StartContainer
	Oct 07 11:06:45 ha-685971 crio[644]: time="2024-10-07 11:06:45.252171676Z" level=info msg="Started container" PID=1844 containerID=357fb00a27dd915dfd1ce7054a63ba9997900dcdf0d887aa3052abae46f9c7f1 description=kube-system/kube-vip-ha-685971/kube-vip id=6fd0347a-e638-46d5-bea1-975dbfe2f312 name=/runtime.v1.RuntimeService/StartContainer sandboxID=da29bf7529b01aacf35351347a96b217baeba0a64c8f651b3094e4635d7d8346
	Oct 07 11:06:46 ha-685971 crio[644]: time="2024-10-07 11:06:46.843626932Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=6a920034-f8be-47d3-a982-3af3463d0968 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 11:06:46 ha-685971 crio[644]: time="2024-10-07 11:06:46.843852177Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=6a920034-f8be-47d3-a982-3af3463d0968 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 11:06:46 ha-685971 crio[644]: time="2024-10-07 11:06:46.844879404Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=d5d139cd-57dc-41c9-b939-e416dcdc1285 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 11:06:46 ha-685971 crio[644]: time="2024-10-07 11:06:46.845073700Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=d5d139cd-57dc-41c9-b939-e416dcdc1285 name=/runtime.v1.ImageService/ImageStatus
	Oct 07 11:06:46 ha-685971 crio[644]: time="2024-10-07 11:06:46.845854445Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-685971/kube-controller-manager" id=4da4132f-725e-469f-a2e6-13abf63278d3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 07 11:06:46 ha-685971 crio[644]: time="2024-10-07 11:06:46.845971589Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 07 11:06:46 ha-685971 crio[644]: time="2024-10-07 11:06:46.921952853Z" level=info msg="Created container d4c2148e2ad7b50c437d1a8547e663bc8a6ee2c96b7cd8b27e6abcf594c304c9: kube-system/kube-controller-manager-ha-685971/kube-controller-manager" id=4da4132f-725e-469f-a2e6-13abf63278d3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 07 11:06:46 ha-685971 crio[644]: time="2024-10-07 11:06:46.922642424Z" level=info msg="Starting container: d4c2148e2ad7b50c437d1a8547e663bc8a6ee2c96b7cd8b27e6abcf594c304c9" id=850c7c47-63ea-41ea-96c5-e0f3a8a7dd41 name=/runtime.v1.RuntimeService/StartContainer
	Oct 07 11:06:46 ha-685971 crio[644]: time="2024-10-07 11:06:46.931786538Z" level=info msg="Started container" PID=1886 containerID=d4c2148e2ad7b50c437d1a8547e663bc8a6ee2c96b7cd8b27e6abcf594c304c9 description=kube-system/kube-controller-manager-ha-685971/kube-controller-manager id=850c7c47-63ea-41ea-96c5-e0f3a8a7dd41 name=/runtime.v1.RuntimeService/StartContainer sandboxID=78e1e9a83016d04dd3dfedef793d74cbab1e5d28663765259816bebcb41adef0
	Oct 07 11:06:54 ha-685971 crio[644]: time="2024-10-07 11:06:54.001527807Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Oct 07 11:06:54 ha-685971 crio[644]: time="2024-10-07 11:06:54.006767158Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 07 11:06:54 ha-685971 crio[644]: time="2024-10-07 11:06:54.006813566Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 07 11:06:54 ha-685971 crio[644]: time="2024-10-07 11:06:54.006837656Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Oct 07 11:06:54 ha-685971 crio[644]: time="2024-10-07 11:06:54.012365924Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 07 11:06:54 ha-685971 crio[644]: time="2024-10-07 11:06:54.012407311Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 07 11:06:54 ha-685971 crio[644]: time="2024-10-07 11:06:54.012426732Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Oct 07 11:06:54 ha-685971 crio[644]: time="2024-10-07 11:06:54.017165565Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 07 11:06:54 ha-685971 crio[644]: time="2024-10-07 11:06:54.018151906Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 07 11:06:54 ha-685971 crio[644]: time="2024-10-07 11:06:54.018204213Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Oct 07 11:06:54 ha-685971 crio[644]: time="2024-10-07 11:06:54.022843322Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 07 11:06:54 ha-685971 crio[644]: time="2024-10-07 11:06:54.022880671Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d4c2148e2ad7b       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   38 seconds ago       Running             kube-controller-manager   6                   78e1e9a83016d       kube-controller-manager-ha-685971
	357fb00a27dd9       4eadde00b6c50b581474eaa28b09bfcdd974ccaab8bafac22b08e7d2ecd66fc1   40 seconds ago       Running             kube-vip                  3                   da29bf7529b01       kube-vip-ha-685971
	4e7bf78fce0c3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   41 seconds ago       Running             storage-provisioner       4                   34cf90434c500       storage-provisioner
	dfeeb88155731       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   44 seconds ago       Running             kube-apiserver            4                   90bb2719dbde6       kube-apiserver-ha-685971
	4c4092c4be5f2       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   About a minute ago   Running             coredns                   2                   acdd048e85223       coredns-7c65d6cfc9-z86x9
	919f24294c177       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   2                   fca3e5b8979b9       busybox-7dff88458-qfq6v
	f62d614be3c8d       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d   About a minute ago   Running             kube-proxy                2                   d1c2069f4aa4e       kube-proxy-787s7
	ad1e6a39be6a5       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   About a minute ago   Running             kindnet-cni               2                   6021cab868f17       kindnet-hgwsj
	49d9c506c8bad       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       3                   34cf90434c500       storage-provisioner
	5fa61f41981d1       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   About a minute ago   Running             coredns                   2                   dd30a0c930b84       coredns-7c65d6cfc9-b5fbm
	20b324bcf7322       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   About a minute ago   Exited              kube-controller-manager   5                   78e1e9a83016d       kube-controller-manager-ha-685971
	bb023efc590b7       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   About a minute ago   Running             etcd                      2                   127abbb5a6b34       etcd-ha-685971
	037b4ef0cd47c       4eadde00b6c50b581474eaa28b09bfcdd974ccaab8bafac22b08e7d2ecd66fc1   About a minute ago   Exited              kube-vip                  2                   da29bf7529b01       kube-vip-ha-685971
	ba63d93c3ecc3       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   About a minute ago   Exited              kube-apiserver            3                   90bb2719dbde6       kube-apiserver-ha-685971
	2396dbc7b7c78       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d   About a minute ago   Running             kube-scheduler            2                   1a5e783ea90b8       kube-scheduler-ha-685971
	
	
	==> coredns [4c4092c4be5f2b699abf3b6861422fa68e4ad2beac895d2299bca1bced1732eb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51337 - 12717 "HINFO IN 3760912023204947925.1189951863467072751. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028349029s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[883873554]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 11:06:13.814) (total time: 30000ms):
	Trace[883873554]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:06:43.814)
	Trace[883873554]: [30.000780768s] [30.000780768s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[613945017]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 11:06:13.814) (total time: 30000ms):
	Trace[613945017]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:06:43.815)
	Trace[613945017]: [30.000786742s] [30.000786742s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[174614199]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 11:06:13.814) (total time: 30000ms):
	Trace[174614199]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:06:43.815)
	Trace[174614199]: [30.000737322s] [30.000737322s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [5fa61f41981d1a3c71e1617b216cd985eaf17da38b7dc7dd0cd62ad68962e474] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 75e5db48a73272e2c90919c8256e5cca0293ae0ed689e2ed44f1254a9589c3d004cb3e693d059116718c47e9305987b828b11b2735a1cefa59e4a9489dda5cee
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51733 - 35381 "HINFO IN 4569573857002565025.1832094915500994681. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015090947s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[839634633]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 11:06:13.634) (total time: 30001ms):
	Trace[839634633]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:06:43.635)
	Trace[839634633]: [30.001933072s] [30.001933072s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[643354408]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 11:06:13.635) (total time: 30001ms):
	Trace[643354408]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:06:43.636)
	Trace[643354408]: [30.001511487s] [30.001511487s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[685408715]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 11:06:13.635) (total time: 30002ms):
	Trace[685408715]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (11:06:43.636)
	Trace[685408715]: [30.002008493s] [30.002008493s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-685971
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-685971
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-685971
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T10_57_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:57:11 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685971
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 11:06:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Oct 2024 11:06:05 +0000   Mon, 07 Oct 2024 11:07:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Oct 2024 11:06:05 +0000   Mon, 07 Oct 2024 11:07:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Oct 2024 11:06:05 +0000   Mon, 07 Oct 2024 11:07:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Oct 2024 11:06:05 +0000   Mon, 07 Oct 2024 11:07:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    ha-685971
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 08421a18382a48d79da42d6d71a3c835
	  System UUID:                77a900bf-1517-47c5-81a7-691bf06817d6
	  Boot ID:                    9a8fefe6-3962-4cb9-809a-2b740ac8992f
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qfq6v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 coredns-7c65d6cfc9-b5fbm             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 coredns-7c65d6cfc9-z86x9             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-ha-685971                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-hgwsj                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-685971             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-685971    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-787s7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-685971             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-685971                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 71s                    kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 4m29s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-685971 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-685971 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-685971 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node ha-685971 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node ha-685971 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node ha-685971 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                    node-controller  Node ha-685971 event: Registered Node ha-685971 in Controller
	  Normal   NodeReady                9m57s                  kubelet          Node ha-685971 status is now: NodeReady
	  Normal   RegisteredNode           9m39s                  node-controller  Node ha-685971 event: Registered Node ha-685971 in Controller
	  Normal   RegisteredNode           8m33s                  node-controller  Node ha-685971 event: Registered Node ha-685971 in Controller
	  Normal   NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node ha-685971 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m24s (x7 over 5m24s)  kubelet          Node ha-685971 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m24s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m24s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node ha-685971 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           4m46s                  node-controller  Node ha-685971 event: Registered Node ha-685971 in Controller
	  Normal   RegisteredNode           4m22s                  node-controller  Node ha-685971 event: Registered Node ha-685971 in Controller
	  Normal   RegisteredNode           3m31s                  node-controller  Node ha-685971 event: Registered Node ha-685971 in Controller
	  Normal   Starting                 119s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 119s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  119s (x8 over 119s)    kubelet          Node ha-685971 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    119s (x8 over 119s)    kubelet          Node ha-685971 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     119s (x7 over 119s)    kubelet          Node ha-685971 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           82s                    node-controller  Node ha-685971 event: Registered Node ha-685971 in Controller
	  Normal   RegisteredNode           36s                    node-controller  Node ha-685971 event: Registered Node ha-685971 in Controller
	  Normal   NodeNotReady             7s                     node-controller  Node ha-685971 status is now: NodeNotReady
	
	
	Name:               ha-685971-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-685971-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-685971
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T10_57_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 10:57:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685971-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 11:07:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 11:05:57 +0000   Mon, 07 Oct 2024 10:57:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 11:05:57 +0000   Mon, 07 Oct 2024 10:57:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 11:05:57 +0000   Mon, 07 Oct 2024 10:57:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 11:05:57 +0000   Mon, 07 Oct 2024 10:58:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    ha-685971-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a100ab38c66446a9ebe0b929a322460
	  System UUID:                1164dd5d-67e1-4e83-a4ca-68bbdb572a5a
	  Boot ID:                    9a8fefe6-3962-4cb9-809a-2b740ac8992f
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w84nw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 etcd-ha-685971-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m48s
	  kube-system                 kindnet-wn9mj                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m50s
	  kube-system                 kube-apiserver-ha-685971-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m48s
	  kube-system                 kube-controller-manager-ha-685971-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m45s
	  kube-system                 kube-proxy-4frrm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 kube-scheduler-ha-685971-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m41s
	  kube-system                 kube-vip-ha-685971-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 74s                    kube-proxy       
	  Normal   Starting                 9m43s                  kube-proxy       
	  Normal   Starting                 5m55s                  kube-proxy       
	  Normal   Starting                 4m26s                  kube-proxy       
	  Normal   NodeHasSufficientPID     9m50s (x7 over 9m50s)  kubelet          Node ha-685971-m02 status is now: NodeHasSufficientPID
	  Normal   CIDRAssignmentFailed     9m50s                  cidrAllocator    Node ha-685971-m02 status is now: CIDRAssignmentFailed
	  Warning  CgroupV1                 9m50s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  9m50s (x8 over 9m50s)  kubelet          Node ha-685971-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m50s (x8 over 9m50s)  kubelet          Node ha-685971-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           9m45s                  node-controller  Node ha-685971-m02 event: Registered Node ha-685971-m02 in Controller
	  Normal   RegisteredNode           9m39s                  node-controller  Node ha-685971-m02 event: Registered Node ha-685971-m02 in Controller
	  Normal   RegisteredNode           8m33s                  node-controller  Node ha-685971-m02 event: Registered Node ha-685971-m02 in Controller
	  Normal   NodeHasSufficientPID     6m30s (x7 over 6m30s)  kubelet          Node ha-685971-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    6m30s (x8 over 6m30s)  kubelet          Node ha-685971-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m30s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m30s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m30s (x8 over 6m30s)  kubelet          Node ha-685971-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     5m22s (x7 over 5m22s)  kubelet          Node ha-685971-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m22s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m22s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m22s (x8 over 5m22s)  kubelet          Node ha-685971-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m22s (x8 over 5m22s)  kubelet          Node ha-685971-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           4m46s                  node-controller  Node ha-685971-m02 event: Registered Node ha-685971-m02 in Controller
	  Normal   RegisteredNode           4m22s                  node-controller  Node ha-685971-m02 event: Registered Node ha-685971-m02 in Controller
	  Normal   RegisteredNode           3m31s                  node-controller  Node ha-685971-m02 event: Registered Node ha-685971-m02 in Controller
	  Normal   Starting                 117s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 117s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  117s (x8 over 117s)    kubelet          Node ha-685971-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    117s (x8 over 117s)    kubelet          Node ha-685971-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     117s (x7 over 117s)    kubelet          Node ha-685971-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           82s                    node-controller  Node ha-685971-m02 event: Registered Node ha-685971-m02 in Controller
	  Normal   RegisteredNode           36s                    node-controller  Node ha-685971-m02 event: Registered Node ha-685971-m02 in Controller
	
	
	Name:               ha-685971-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-685971-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b06d4f2a8ccec01d969e2c3d6aacc70438e6b0f
	                    minikube.k8s.io/name=ha-685971
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T11_00_03_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 11:00:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-685971-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 11:07:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 11:07:07 +0000   Mon, 07 Oct 2024 11:07:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 11:07:07 +0000   Mon, 07 Oct 2024 11:07:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 11:07:07 +0000   Mon, 07 Oct 2024 11:07:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 11:07:07 +0000   Mon, 07 Oct 2024 11:07:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.5
	  Hostname:    ha-685971-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e26f51d9aa61422a8654a8f8d7f74be7
	  System UUID:                65bdac46-bae0-4a5c-b8c8-3668021afe74
	  Boot ID:                    9a8fefe6-3962-4cb9-809a-2b740ac8992f
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4rqh2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 kindnet-l95rf              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m24s
	  kube-system                 kube-proxy-bdxpj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m22s                  kube-proxy       
	  Normal   Starting                 14s                    kube-proxy       
	  Normal   Starting                 2m57s                  kube-proxy       
	  Warning  CgroupV1                 7m25s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   CIDRAssignmentFailed     7m24s                  cidrAllocator    Node ha-685971-m04 status is now: CIDRAssignmentFailed
	  Normal   CIDRAssignmentFailed     7m24s                  cidrAllocator    Node ha-685971-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientPID     7m24s (x2 over 7m25s)  kubelet          Node ha-685971-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m24s (x2 over 7m25s)  kubelet          Node ha-685971-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  7m24s (x2 over 7m25s)  kubelet          Node ha-685971-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           7m24s                  node-controller  Node ha-685971-m04 event: Registered Node ha-685971-m04 in Controller
	  Normal   RegisteredNode           7m23s                  node-controller  Node ha-685971-m04 event: Registered Node ha-685971-m04 in Controller
	  Normal   RegisteredNode           7m20s                  node-controller  Node ha-685971-m04 event: Registered Node ha-685971-m04 in Controller
	  Normal   NodeReady                7m12s                  kubelet          Node ha-685971-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m46s                  node-controller  Node ha-685971-m04 event: Registered Node ha-685971-m04 in Controller
	  Normal   RegisteredNode           4m22s                  node-controller  Node ha-685971-m04 event: Registered Node ha-685971-m04 in Controller
	  Normal   NodeNotReady             4m6s                   node-controller  Node ha-685971-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m31s                  node-controller  Node ha-685971-m04 event: Registered Node ha-685971-m04 in Controller
	  Warning  CgroupV1                 3m19s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 3m19s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     3m12s (x7 over 3m19s)  kubelet          Node ha-685971-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  3m6s (x8 over 3m19s)   kubelet          Node ha-685971-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m6s (x8 over 3m19s)   kubelet          Node ha-685971-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           82s                    node-controller  Node ha-685971-m04 event: Registered Node ha-685971-m04 in Controller
	  Normal   NodeNotReady             42s                    node-controller  Node ha-685971-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           36s                    node-controller  Node ha-685971-m04 event: Registered Node ha-685971-m04 in Controller
	  Normal   Starting                 32s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 32s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     26s (x7 over 32s)      kubelet          Node ha-685971-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  19s (x8 over 32s)      kubelet          Node ha-685971-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 32s)      kubelet          Node ha-685971-m04 status is now: NodeHasNoDiskPressure
	
	
	==> dmesg <==
	
	
	==> etcd [bb023efc590b77d3c621c2e03f941173f3f05a94f320a7ea24e1e36bd709fca9] <==
	{"level":"warn","ts":"2024-10-07T11:05:56.965189Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T11:05:52.497661Z","time spent":"4.467522165s","remote":"127.0.0.1:52222","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":3,"response size":18037,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 "}
	{"level":"warn","ts":"2024-10-07T11:05:56.965457Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.509473034s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:500 ","response":"range_response_count:67 size:60555"}
	{"level":"info","ts":"2024-10-07T11:05:56.965487Z","caller":"traceutil/trace.go:171","msg":"trace[1335992784] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:67; response_revision:2501; }","duration":"4.509505182s","start":"2024-10-07T11:05:52.455976Z","end":"2024-10-07T11:05:56.965481Z","steps":["trace[1335992784] 'agreement among raft nodes before linearized reading'  (duration: 4.509293466s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:05:56.965505Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T11:05:52.455944Z","time spent":"4.50955506s","remote":"127.0.0.1:52394","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":67,"response size":60578,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:500 "}
	{"level":"warn","ts":"2024-10-07T11:05:56.965696Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.512532797s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" limit:500 ","response":"range_response_count:21 size:20190"}
	{"level":"info","ts":"2024-10-07T11:05:56.965724Z","caller":"traceutil/trace.go:171","msg":"trace[970756542] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:21; response_revision:2501; }","duration":"4.512563747s","start":"2024-10-07T11:05:52.453154Z","end":"2024-10-07T11:05:56.965718Z","steps":["trace[970756542] 'agreement among raft nodes before linearized reading'  (duration: 4.512452018s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:05:56.965742Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T11:05:52.453116Z","time spent":"4.51262064s","remote":"127.0.0.1:52604","response type":"/etcdserverpb.KV/Range","request count":0,"request size":97,"response count":21,"response size":20213,"request content":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" limit:500 "}
	{"level":"warn","ts":"2024-10-07T11:05:56.965887Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.548880141s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"range_response_count:1 size:442"}
	{"level":"info","ts":"2024-10-07T11:05:56.965911Z","caller":"traceutil/trace.go:171","msg":"trace[1923245144] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:2501; }","duration":"4.548904633s","start":"2024-10-07T11:05:52.417001Z","end":"2024-10-07T11:05:56.965906Z","steps":["trace[1923245144] 'agreement among raft nodes before linearized reading'  (duration: 4.548842241s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:05:56.965928Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T11:05:52.416987Z","time spent":"4.548935722s","remote":"127.0.0.1:52410","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":465,"request content":"key:\"/registry/priorityclasses/system-node-critical\" "}
	{"level":"warn","ts":"2024-10-07T11:05:56.966014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.549124192s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T11:05:56.966036Z","caller":"traceutil/trace.go:171","msg":"trace[1284423104] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:2501; }","duration":"4.549146781s","start":"2024-10-07T11:05:52.416885Z","end":"2024-10-07T11:05:56.966031Z","steps":["trace[1284423104] 'agreement among raft nodes before linearized reading'  (duration: 4.549114347s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:05:56.966051Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T11:05:52.416839Z","time spent":"4.549207515s","remote":"127.0.0.1:52394","response type":"/etcdserverpb.KV/Range","request count":0,"request size":26,"response count":0,"response size":28,"request content":"key:\"/registry/clusterroles\" limit:1 "}
	{"level":"warn","ts":"2024-10-07T11:05:56.966128Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.55225356s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:500 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T11:05:56.966149Z","caller":"traceutil/trace.go:171","msg":"trace[1889653917] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; response_count:0; response_revision:2501; }","duration":"4.552275771s","start":"2024-10-07T11:05:52.413867Z","end":"2024-10-07T11:05:56.966143Z","steps":["trace[1889653917] 'agreement among raft nodes before linearized reading'  (duration: 4.552244206s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:05:56.966164Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T11:05:52.413822Z","time spent":"4.552337481s","remote":"127.0.0.1:52586","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":28,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:500 "}
	{"level":"warn","ts":"2024-10-07T11:05:56.966280Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.568721228s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:500 ","response":"range_response_count:2 size:1908"}
	{"level":"info","ts":"2024-10-07T11:05:56.966301Z","caller":"traceutil/trace.go:171","msg":"trace[10272436] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:2; response_revision:2501; }","duration":"4.568745088s","start":"2024-10-07T11:05:52.397552Z","end":"2024-10-07T11:05:56.966297Z","steps":["trace[10272436] 'agreement among raft nodes before linearized reading'  (duration: 4.568681877s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:05:56.966319Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T11:05:52.397512Z","time spent":"4.568799988s","remote":"127.0.0.1:52242","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":2,"response size":1931,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:500 "}
	{"level":"warn","ts":"2024-10-07T11:05:56.966436Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.200164344s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/apiserver-53gopw5zu47xjd3k6eabepxkse\" ","response":"range_response_count:1 size:688"}
	{"level":"info","ts":"2024-10-07T11:05:56.966459Z","caller":"traceutil/trace.go:171","msg":"trace[1361198522] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-53gopw5zu47xjd3k6eabepxkse; range_end:; response_count:1; response_revision:2501; }","duration":"5.200188909s","start":"2024-10-07T11:05:51.766265Z","end":"2024-10-07T11:05:56.966453Z","steps":["trace[1361198522] 'agreement among raft nodes before linearized reading'  (duration: 5.200125476s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:05:56.966476Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T11:05:51.766226Z","time spent":"5.200243982s","remote":"127.0.0.1:52314","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":711,"request content":"key:\"/registry/leases/kube-system/apiserver-53gopw5zu47xjd3k6eabepxkse\" "}
	{"level":"warn","ts":"2024-10-07T11:05:56.966579Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.380084525s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-685971-m02\" ","response":"range_response_count:1 size:6102"}
	{"level":"info","ts":"2024-10-07T11:05:56.966601Z","caller":"traceutil/trace.go:171","msg":"trace[1234262840] range","detail":"{range_begin:/registry/minions/ha-685971-m02; range_end:; response_count:1; response_revision:2501; }","duration":"5.380106819s","start":"2024-10-07T11:05:51.586489Z","end":"2024-10-07T11:05:56.966596Z","steps":["trace[1234262840] 'agreement among raft nodes before linearized reading'  (duration: 5.380054224s)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:05:56.966615Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T11:05:51.586456Z","time spent":"5.380155598s","remote":"127.0.0.1:52222","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":6125,"request content":"key:\"/registry/minions/ha-685971-m02\" "}
	
	
	==> kernel <==
	 11:07:26 up  6:49,  0 users,  load average: 1.73, 2.46, 2.07
	Linux ha-685971 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [ad1e6a39be6a5ef9f9d38d6df0a5540c6443f397d9e31beb5da254b537dcb73e] <==
	I1007 11:06:54.000780       1 main.go:322] Node ha-685971-m02 has CIDR [10.244.1.0/24] 
	I1007 11:06:54.000951       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1007 11:06:54.001062       1 main.go:295] Handling node with IPs: map[192.168.58.5:{}]
	I1007 11:06:54.001103       1 main.go:322] Node ha-685971-m04 has CIDR [10.244.4.0/24] 
	I1007 11:06:54.001177       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 192.168.58.5 Flags: [] Table: 0} 
	I1007 11:06:54.001246       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 11:06:54.001282       1 main.go:299] handling current node
	I1007 11:07:04.006904       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 11:07:04.006947       1 main.go:299] handling current node
	I1007 11:07:04.006964       1 main.go:295] Handling node with IPs: map[192.168.58.3:{}]
	I1007 11:07:04.006970       1 main.go:322] Node ha-685971-m02 has CIDR [10.244.1.0/24] 
	I1007 11:07:04.007131       1 main.go:295] Handling node with IPs: map[192.168.58.5:{}]
	I1007 11:07:04.007150       1 main.go:322] Node ha-685971-m04 has CIDR [10.244.4.0/24] 
	I1007 11:07:13.999074       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 11:07:13.999111       1 main.go:299] handling current node
	I1007 11:07:13.999129       1 main.go:295] Handling node with IPs: map[192.168.58.3:{}]
	I1007 11:07:13.999135       1 main.go:322] Node ha-685971-m02 has CIDR [10.244.1.0/24] 
	I1007 11:07:13.999247       1 main.go:295] Handling node with IPs: map[192.168.58.5:{}]
	I1007 11:07:13.999260       1 main.go:322] Node ha-685971-m04 has CIDR [10.244.4.0/24] 
	I1007 11:07:24.000328       1 main.go:295] Handling node with IPs: map[192.168.58.2:{}]
	I1007 11:07:24.000503       1 main.go:299] handling current node
	I1007 11:07:24.000546       1 main.go:295] Handling node with IPs: map[192.168.58.3:{}]
	I1007 11:07:24.000592       1 main.go:322] Node ha-685971-m02 has CIDR [10.244.1.0/24] 
	I1007 11:07:24.000771       1 main.go:295] Handling node with IPs: map[192.168.58.5:{}]
	I1007 11:07:24.000812       1 main.go:322] Node ha-685971-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [ba63d93c3ecc3afb980da72352f9622e99453d3e04db33296277356220166fdd] <==
	E1007 11:05:51.607617       1 cacher.go:478] cacher (services): unexpected ListAndWatch error: failed to list *core.Service: etcdserver: request timed out; reinitializing...
	W1007 11:05:51.607866       1 reflector.go:561] storage/cacher.go:/serviceaccounts: failed to list *core.ServiceAccount: etcdserver: request timed out
	E1007 11:05:51.607917       1 cacher.go:478] cacher (serviceaccounts): unexpected ListAndWatch error: failed to list *core.ServiceAccount: etcdserver: request timed out; reinitializing...
	W1007 11:05:56.922688       1 reflector.go:561] storage/cacher.go:/apiextensions.k8s.io/customresourcedefinitions: failed to list *apiextensions.CustomResourceDefinition: etcdserver: leader changed
	E1007 11:05:56.922716       1 cacher.go:478] cacher (customresourcedefinitions.apiextensions.k8s.io): unexpected ListAndWatch error: failed to list *apiextensions.CustomResourceDefinition: etcdserver: leader changed; reinitializing...
	I1007 11:05:57.020789       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1007 11:05:57.020896       1 policy_source.go:224] refreshing policies
	I1007 11:05:57.031065       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1007 11:05:57.032004       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1007 11:05:57.038854       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1007 11:05:57.039040       1 aggregator.go:171] initial CRD sync complete...
	I1007 11:05:57.039107       1 autoregister_controller.go:144] Starting autoregister controller
	I1007 11:05:57.039138       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1007 11:05:57.039167       1 cache.go:39] Caches are synced for autoregister controller
	I1007 11:05:57.057530       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1007 11:05:57.098949       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1007 11:05:57.108438       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1007 11:05:57.115988       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1007 11:05:57.116017       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1007 11:05:57.118490       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1007 11:05:57.118801       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1007 11:05:57.125282       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1007 11:05:57.125294       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1007 11:05:57.657994       1 shared_informer.go:320] Caches are synced for configmaps
	F1007 11:06:40.417251       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-apiserver [dfeeb88155731991668fceef59e0033fb998c20414ce49475666e0fa2cf88056] <==
	I1007 11:06:44.196727       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1007 11:06:44.196850       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1007 11:06:44.197973       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1007 11:06:44.197995       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I1007 11:06:44.558532       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1007 11:06:44.564682       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1007 11:06:44.564709       1 policy_source.go:224] refreshing policies
	I1007 11:06:44.573173       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1007 11:06:44.593969       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1007 11:06:44.594171       1 shared_informer.go:320] Caches are synced for configmaps
	I1007 11:06:44.594277       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1007 11:06:44.594341       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1007 11:06:44.594675       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1007 11:06:44.622046       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1007 11:06:44.624565       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1007 11:06:44.624656       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1007 11:06:44.625720       1 aggregator.go:171] initial CRD sync complete...
	I1007 11:06:44.625812       1 autoregister_controller.go:144] Starting autoregister controller
	I1007 11:06:44.625820       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1007 11:06:44.625829       1 cache.go:39] Caches are synced for autoregister controller
	I1007 11:06:44.676524       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1007 11:06:45.150797       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1007 11:06:45.807453       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.58.2 192.168.58.3]
	I1007 11:06:45.809453       1 controller.go:615] quota admission added evaluator for: endpoints
	I1007 11:06:45.823502       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [20b324bcf7322d4259836e33168ec60ef2bbd460b84f4ffddcb032ed09294f19] <==
	I1007 11:06:14.761456       1 serving.go:386] Generated self-signed cert in-memory
	I1007 11:06:15.419442       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1007 11:06:15.419477       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 11:06:15.421346       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1007 11:06:15.421525       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1007 11:06:15.421754       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1007 11:06:15.421832       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1007 11:06:25.443453       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-to
ken-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [d4c2148e2ad7b50c437d1a8547e663bc8a6ee2c96b7cd8b27e6abcf594c304c9] <==
	I1007 11:06:50.958733       1 shared_informer.go:320] Caches are synced for stateful set
	I1007 11:06:50.963214       1 shared_informer.go:320] Caches are synced for PVC protection
	I1007 11:06:50.983658       1 shared_informer.go:320] Caches are synced for persistent volume
	I1007 11:06:50.988066       1 shared_informer.go:320] Caches are synced for expand
	I1007 11:06:50.989266       1 shared_informer.go:320] Caches are synced for attach detach
	I1007 11:06:51.004632       1 shared_informer.go:320] Caches are synced for resource quota
	I1007 11:06:51.025224       1 shared_informer.go:320] Caches are synced for resource quota
	I1007 11:06:51.433947       1 shared_informer.go:320] Caches are synced for garbage collector
	I1007 11:06:51.448637       1 shared_informer.go:320] Caches are synced for garbage collector
	I1007 11:06:51.448670       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1007 11:07:07.260088       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685971-m04"
	I1007 11:07:07.260240       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-685971-m04"
	I1007 11:07:07.283187       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685971-m04"
	I1007 11:07:09.511315       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685971-m04"
	I1007 11:07:10.418856       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.476µs"
	I1007 11:07:11.576653       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.016µs"
	I1007 11:07:12.556096       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="24.09235ms"
	I1007 11:07:12.556293       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="71.999µs"
	I1007 11:07:19.531421       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685971"
	I1007 11:07:19.531581       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-685971-m04"
	I1007 11:07:19.550889       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685971"
	I1007 11:07:19.632232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.451478ms"
	I1007 11:07:19.632468       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.491µs"
	I1007 11:07:20.885705       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685971"
	I1007 11:07:24.740544       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-685971"
	
	
	==> kube-proxy [f62d614be3c8de7c1a743a2598cf3a4fafcf66c6d162b8d807f09ad4747a97c1] <==
	I1007 11:06:13.800452       1 server_linux.go:66] "Using iptables proxy"
	I1007 11:06:14.128754       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.58.2"]
	E1007 11:06:14.133513       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 11:06:14.171493       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1007 11:06:14.171639       1 server_linux.go:169] "Using iptables Proxier"
	I1007 11:06:14.175910       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 11:06:14.176702       1 server.go:483] "Version info" version="v1.31.1"
	I1007 11:06:14.176945       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 11:06:14.178730       1 config.go:199] "Starting service config controller"
	I1007 11:06:14.178842       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 11:06:14.178928       1 config.go:105] "Starting endpoint slice config controller"
	I1007 11:06:14.178978       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 11:06:14.201143       1 config.go:328] "Starting node config controller"
	I1007 11:06:14.201233       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 11:06:14.279354       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 11:06:14.279482       1 shared_informer.go:320] Caches are synced for service config
	I1007 11:06:14.301950       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2396dbc7b7c7808b50cdea85d2a2f2f8aac91c312f2ebc0f83d22403bfdf2b0e] <==
	W1007 11:05:49.568297       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 11:05:49.568339       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:05:50.326706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 11:05:50.326760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:05:50.688231       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 11:05:50.688299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:05:50.906646       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 11:05:50.906685       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1007 11:05:51.004020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 11:05:51.004069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:05:51.090194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 11:05:51.090245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:05:51.573378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 11:05:51.573487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:05:55.814040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 11:05:55.814082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:05:55.976000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 11:05:55.976046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:05:56.749804       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 11:05:56.749930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:05:56.837674       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 11:05:56.837788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:05:56.948141       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 11:05:56.948304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1007 11:05:59.810654       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 11:06:32 ha-685971 kubelet[762]: E1007 11:06:32.966115     762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-685971_kube-system(828fb3c386c44666eb5ed57778821fd7)\"" pod="kube-system/kube-controller-manager-ha-685971" podUID="828fb3c386c44666eb5ed57778821fd7"
	Oct 07 11:06:37 ha-685971 kubelet[762]: E1007 11:06:37.883535     762 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728299197883357575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:06:37 ha-685971 kubelet[762]: E1007 11:06:37.883606     762 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728299197883357575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:06:41 ha-685971 kubelet[762]: I1007 11:06:41.062491     762 scope.go:117] "RemoveContainer" containerID="ba63d93c3ecc3afb980da72352f9622e99453d3e04db33296277356220166fdd"
	Oct 07 11:06:41 ha-685971 kubelet[762]: I1007 11:06:41.062836     762 status_manager.go:851] "Failed to get status for pod" podUID="3abe2ff69dd30a4ca726eb88136e0695" pod="kube-system/kube-apiserver-ha-685971" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-685971\": dial tcp 192.168.58.254:8443: connect: connection refused"
	Oct 07 11:06:41 ha-685971 kubelet[762]: E1007 11:06:41.064502     762 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-685971.17fc26e7e43b04bf\": dial tcp 192.168.58.254:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-ha-685971.17fc26e7e43b04bf  kube-system   2545 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-685971,UID:3abe2ff69dd30a4ca726eb88136e0695,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.31.1\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-685971,},FirstTimestamp:2024-10-07 11:05:34 +0000 UTC,LastTimestamp:2024-10-07 11:06:41.063715609 +0000 UTC m=+73.380593725,Count:2,Type:Normal,EventTime:0001-01-01 00:00
:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-685971,}"
	Oct 07 11:06:44 ha-685971 kubelet[762]: I1007 11:06:44.071946     762 scope.go:117] "RemoveContainer" containerID="49d9c506c8bad784960eef6ccad629fa0a9ab5a41a5126af60d92815805ba8f3"
	Oct 07 11:06:44 ha-685971 kubelet[762]: E1007 11:06:44.252780     762 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.58.254:59882->192.168.58.254:8443: read: connection reset by peer" logger="UnhandledError"
	Oct 07 11:06:44 ha-685971 kubelet[762]: E1007 11:06:44.252854     762 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.58.254:59892->192.168.58.254:8443: read: connection reset by peer" logger="UnhandledError"
	Oct 07 11:06:44 ha-685971 kubelet[762]: E1007 11:06:44.252890     762 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.58.254:59916->192.168.58.254:8443: read: connection reset by peer" logger="UnhandledError"
	Oct 07 11:06:44 ha-685971 kubelet[762]: E1007 11:06:44.252934     762 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.58.254:59940->192.168.58.254:8443: read: connection reset by peer" logger="UnhandledError"
	Oct 07 11:06:45 ha-685971 kubelet[762]: I1007 11:06:45.091500     762 scope.go:117] "RemoveContainer" containerID="037b4ef0cd47cd4f7e4c6e08e2802ee19798c7c2d6b65829a9c8e6356ae4a7fc"
	Oct 07 11:06:46 ha-685971 kubelet[762]: I1007 11:06:46.843000     762 scope.go:117] "RemoveContainer" containerID="20b324bcf7322d4259836e33168ec60ef2bbd460b84f4ffddcb032ed09294f19"
	Oct 07 11:06:47 ha-685971 kubelet[762]: E1007 11:06:47.885101     762 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728299207884666561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:06:47 ha-685971 kubelet[762]: E1007 11:06:47.885131     762 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728299207884666561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:06:56 ha-685971 kubelet[762]: E1007 11:06:56.345767     762 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-685971?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Oct 07 11:06:57 ha-685971 kubelet[762]: E1007 11:06:57.886355     762 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728299217885934064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:06:57 ha-685971 kubelet[762]: E1007 11:06:57.886409     762 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728299217885934064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:07:06 ha-685971 kubelet[762]: E1007 11:07:06.346604     762 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-685971?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Oct 07 11:07:07 ha-685971 kubelet[762]: E1007 11:07:07.887512     762 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728299227887195489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:07:07 ha-685971 kubelet[762]: E1007 11:07:07.887549     762 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728299227887195489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:07:16 ha-685971 kubelet[762]: E1007 11:07:16.346855     762 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-685971?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
	Oct 07 11:07:17 ha-685971 kubelet[762]: E1007 11:07:17.888591     762 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728299237888405395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:07:17 ha-685971 kubelet[762]: E1007 11:07:17.888639     762 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728299237888405395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:07:26 ha-685971 kubelet[762]: E1007 11:07:26.347327     762 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-685971?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-685971 -n ha-685971
helpers_test.go:261: (dbg) Run:  kubectl --context ha-685971 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (128.50s)

Test pass (295/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.9
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 13.35
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 5.19
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 13.35
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
27 TestAddons/Setup 184.78
31 TestAddons/serial/GCPAuth/Namespaces 0.21
34 TestAddons/parallel/Registry 16.23
36 TestAddons/parallel/InspektorGadget 10.75
39 TestAddons/parallel/CSI 69.32
40 TestAddons/parallel/Headlamp 16.82
41 TestAddons/parallel/CloudSpanner 6.59
42 TestAddons/parallel/LocalPath 11.04
43 TestAddons/parallel/NvidiaDevicePlugin 6.52
44 TestAddons/parallel/Yakd 11.78
45 TestAddons/StoppedEnableDisable 12.23
46 TestCertOptions 38.65
47 TestCertExpiration 248.77
49 TestForceSystemdFlag 42.26
50 TestForceSystemdEnv 44.62
56 TestErrorSpam/setup 31.12
57 TestErrorSpam/start 0.74
58 TestErrorSpam/status 1.12
59 TestErrorSpam/pause 1.77
60 TestErrorSpam/unpause 1.81
61 TestErrorSpam/stop 1.44
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 45.33
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 40.21
68 TestFunctional/serial/KubeContext 0.09
69 TestFunctional/serial/KubectlGetPods 0.1
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.13
73 TestFunctional/serial/CacheCmd/cache/add_local 1.42
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.96
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 39.26
82 TestFunctional/serial/ComponentHealth 0.09
83 TestFunctional/serial/LogsCmd 1.7
84 TestFunctional/serial/LogsFileCmd 1.76
85 TestFunctional/serial/InvalidService 4.29
87 TestFunctional/parallel/ConfigCmd 0.48
88 TestFunctional/parallel/DashboardCmd 9.6
89 TestFunctional/parallel/DryRun 0.46
90 TestFunctional/parallel/InternationalLanguage 0.24
91 TestFunctional/parallel/StatusCmd 1.02
95 TestFunctional/parallel/ServiceCmdConnect 13.66
96 TestFunctional/parallel/AddonsCmd 0.21
97 TestFunctional/parallel/PersistentVolumeClaim 27.06
99 TestFunctional/parallel/SSHCmd 0.67
100 TestFunctional/parallel/CpCmd 2.4
102 TestFunctional/parallel/FileSync 0.38
103 TestFunctional/parallel/CertSync 2.14
107 TestFunctional/parallel/NodeLabels 0.11
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
111 TestFunctional/parallel/License 0.29
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.6
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
125 TestFunctional/parallel/ProfileCmd/profile_list 0.43
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
127 TestFunctional/parallel/ServiceCmd/List 0.67
128 TestFunctional/parallel/MountCmd/any-port 9.54
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
131 TestFunctional/parallel/ServiceCmd/Format 0.49
132 TestFunctional/parallel/ServiceCmd/URL 0.43
133 TestFunctional/parallel/MountCmd/specific-port 2.33
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.46
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 1.02
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.54
142 TestFunctional/parallel/ImageCommands/Setup 0.64
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.67
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.11
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.31
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.82
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 170.85
160 TestMultiControlPlane/serial/DeployApp 8.88
161 TestMultiControlPlane/serial/PingHostFromPods 1.71
162 TestMultiControlPlane/serial/AddWorkerNode 38.17
163 TestMultiControlPlane/serial/NodeLabels 0.15
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.96
165 TestMultiControlPlane/serial/CopyFile 18.71
166 TestMultiControlPlane/serial/StopSecondaryNode 12.78
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.77
168 TestMultiControlPlane/serial/RestartSecondaryNode 22.01
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.1
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 192.31
171 TestMultiControlPlane/serial/DeleteSecondaryNode 12.44
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
173 TestMultiControlPlane/serial/StopCluster 35.92
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
176 TestMultiControlPlane/serial/AddSecondaryNode 67.59
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.98
181 TestJSONOutput/start/Command 78.58
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.75
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.66
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.86
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
206 TestKicCustomNetwork/create_custom_network 36.42
207 TestKicCustomNetwork/use_default_bridge_network 34.14
208 TestKicExistingNetwork 34.32
209 TestKicCustomSubnet 31.17
210 TestKicStaticIP 33.52
211 TestMainNoArgs 0.07
212 TestMinikubeProfile 67.15
215 TestMountStart/serial/StartWithMountFirst 6.9
216 TestMountStart/serial/VerifyMountFirst 0.26
217 TestMountStart/serial/StartWithMountSecond 9.27
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.61
220 TestMountStart/serial/VerifyMountPostDelete 0.25
221 TestMountStart/serial/Stop 1.22
222 TestMountStart/serial/RestartStopped 7.62
223 TestMountStart/serial/VerifyMountPostStop 0.28
226 TestMultiNode/serial/FreshStart2Nodes 104.47
227 TestMultiNode/serial/DeployApp2Nodes 6.71
228 TestMultiNode/serial/PingHostFrom2Pods 0.99
229 TestMultiNode/serial/AddNode 57.88
230 TestMultiNode/serial/MultiNodeLabels 0.1
231 TestMultiNode/serial/ProfileList 0.68
232 TestMultiNode/serial/CopyFile 10.14
233 TestMultiNode/serial/StopNode 2.26
234 TestMultiNode/serial/StartAfterStop 10.26
235 TestMultiNode/serial/RestartKeepsNodes 96.08
236 TestMultiNode/serial/DeleteNode 5.45
237 TestMultiNode/serial/StopMultiNode 23.91
238 TestMultiNode/serial/RestartMultiNode 54.82
239 TestMultiNode/serial/ValidateNameConflict 31.89
244 TestPreload 123.8
246 TestScheduledStopUnix 108.76
249 TestInsufficientStorage 10.35
250 TestRunningBinaryUpgrade 107.72
252 TestKubernetesUpgrade 389.44
253 TestMissingContainerUpgrade 161.89
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 39.23
257 TestNoKubernetes/serial/StartWithStopK8s 7.59
258 TestNoKubernetes/serial/Start 7.27
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
260 TestNoKubernetes/serial/ProfileList 1.2
261 TestNoKubernetes/serial/Stop 1.26
262 TestNoKubernetes/serial/StartNoArgs 7.96
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
264 TestStoppedBinaryUpgrade/Setup 0.6
265 TestStoppedBinaryUpgrade/Upgrade 84.18
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.15
275 TestPause/serial/Start 78.53
276 TestPause/serial/SecondStartNoReconfiguration 22.89
277 TestPause/serial/Pause 0.99
278 TestPause/serial/VerifyStatus 0.4
279 TestPause/serial/Unpause 0.8
280 TestPause/serial/PauseAgain 0.98
281 TestPause/serial/DeletePaused 3.19
282 TestPause/serial/VerifyDeletedResources 0.38
290 TestNetworkPlugins/group/false 4.74
295 TestStartStop/group/old-k8s-version/serial/FirstStart 147.59
296 TestStartStop/group/old-k8s-version/serial/DeployApp 11.55
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.35
298 TestStartStop/group/old-k8s-version/serial/Stop 12.49
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
300 TestStartStop/group/old-k8s-version/serial/SecondStart 34.28
302 TestStartStop/group/embed-certs/serial/FirstStart 85.64
303 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 30.02
304 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.1
305 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
306 TestStartStop/group/old-k8s-version/serial/Pause 2.89
308 TestStartStop/group/no-preload/serial/FirstStart 64.31
309 TestStartStop/group/embed-certs/serial/DeployApp 10.42
310 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.41
311 TestStartStop/group/embed-certs/serial/Stop 12.12
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
313 TestStartStop/group/embed-certs/serial/SecondStart 290.12
314 TestStartStop/group/no-preload/serial/DeployApp 11.4
315 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
316 TestStartStop/group/no-preload/serial/Stop 11.98
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
318 TestStartStop/group/no-preload/serial/SecondStart 267.7
319 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
321 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
322 TestStartStop/group/embed-certs/serial/Pause 4.06
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 80.52
325 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
326 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.16
327 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
328 TestStartStop/group/no-preload/serial/Pause 4.12
330 TestStartStop/group/newest-cni/serial/FirstStart 38.52
331 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.99
333 TestStartStop/group/newest-cni/serial/Stop 1.21
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
335 TestStartStop/group/newest-cni/serial/SecondStart 17.52
336 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.36
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
340 TestStartStop/group/newest-cni/serial/Pause 3.16
341 TestNetworkPlugins/group/auto/Start 81.38
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.9
343 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.23
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
345 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 271.8
346 TestNetworkPlugins/group/auto/KubeletFlags 0.35
347 TestNetworkPlugins/group/auto/NetCatPod 11.3
348 TestNetworkPlugins/group/auto/DNS 0.18
349 TestNetworkPlugins/group/auto/Localhost 0.16
350 TestNetworkPlugins/group/auto/HairPin 0.17
351 TestNetworkPlugins/group/kindnet/Start 78.39
352 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
353 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
354 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
355 TestNetworkPlugins/group/kindnet/DNS 0.19
356 TestNetworkPlugins/group/kindnet/Localhost 0.15
357 TestNetworkPlugins/group/kindnet/HairPin 0.16
358 TestNetworkPlugins/group/calico/Start 65.47
359 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
361 TestNetworkPlugins/group/calico/ControllerPod 6.01
362 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
363 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.84
364 TestNetworkPlugins/group/calico/KubeletFlags 0.39
365 TestNetworkPlugins/group/calico/NetCatPod 12.3
366 TestNetworkPlugins/group/custom-flannel/Start 59.24
367 TestNetworkPlugins/group/calico/DNS 0.31
368 TestNetworkPlugins/group/calico/Localhost 0.21
369 TestNetworkPlugins/group/calico/HairPin 0.28
370 TestNetworkPlugins/group/enable-default-cni/Start 80.75
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.51
373 TestNetworkPlugins/group/custom-flannel/DNS 0.29
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.24
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
376 TestNetworkPlugins/group/flannel/Start 56.42
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.36
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
382 TestNetworkPlugins/group/bridge/Start 41.71
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
385 TestNetworkPlugins/group/flannel/NetCatPod 13.31
386 TestNetworkPlugins/group/flannel/DNS 0.19
387 TestNetworkPlugins/group/flannel/Localhost 0.22
388 TestNetworkPlugins/group/flannel/HairPin 0.21
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.43
390 TestNetworkPlugins/group/bridge/NetCatPod 10.38
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.14
393 TestNetworkPlugins/group/bridge/HairPin 0.15

TestDownloadOnly/v1.20.0/json-events (6.9s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-490247 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-490247 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.899794051s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.90s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1007 10:33:51.967719  896726 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1007 10:33:51.967800  896726 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-490247
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-490247: exit status 85 (79.542813ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-490247 | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC |          |
	|         | -p download-only-490247        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 10:33:45
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 10:33:45.166551  896731 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:33:45.166815  896731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:33:45.168846  896731 out.go:358] Setting ErrFile to fd 2...
	I1007 10:33:45.169318  896731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:33:45.169793  896731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-891319/.minikube/bin
	W1007 10:33:45.170334  896731 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19761-891319/.minikube/config/config.json: open /home/jenkins/minikube-integration/19761-891319/.minikube/config/config.json: no such file or directory
	I1007 10:33:45.171963  896731 out.go:352] Setting JSON to true
	I1007 10:33:45.175756  896731 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22570,"bootTime":1728274656,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1007 10:33:45.175978  896731 start.go:139] virtualization:  
	I1007 10:33:45.181094  896731 out.go:97] [download-only-490247] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1007 10:33:45.181367  896731 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball: no such file or directory
	I1007 10:33:45.181433  896731 notify.go:220] Checking for updates...
	I1007 10:33:45.183781  896731 out.go:169] MINIKUBE_LOCATION=19761
	I1007 10:33:45.186835  896731 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:33:45.188700  896731 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19761-891319/kubeconfig
	I1007 10:33:45.190794  896731 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-891319/.minikube
	I1007 10:33:45.192926  896731 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1007 10:33:45.196999  896731 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 10:33:45.197432  896731 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:33:45.254826  896731 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 10:33:45.254994  896731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 10:33:45.354806  896731 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-07 10:33:45.339959106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 10:33:45.354938  896731 docker.go:318] overlay module found
	I1007 10:33:45.357130  896731 out.go:97] Using the docker driver based on user configuration
	I1007 10:33:45.357171  896731 start.go:297] selected driver: docker
	I1007 10:33:45.357179  896731 start.go:901] validating driver "docker" against <nil>
	I1007 10:33:45.357304  896731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 10:33:45.423984  896731 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-07 10:33:45.413007928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 10:33:45.424337  896731 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 10:33:45.424647  896731 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1007 10:33:45.424817  896731 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 10:33:45.427014  896731 out.go:169] Using Docker driver with root privileges
	I1007 10:33:45.429084  896731 cni.go:84] Creating CNI manager for ""
	I1007 10:33:45.429152  896731 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 10:33:45.429164  896731 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 10:33:45.429267  896731 start.go:340] cluster config:
	{Name:download-only-490247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-490247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:33:45.431450  896731 out.go:97] Starting "download-only-490247" primary control-plane node in "download-only-490247" cluster
	I1007 10:33:45.431494  896731 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 10:33:45.433532  896731 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1007 10:33:45.433564  896731 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 10:33:45.433687  896731 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 10:33:45.455801  896731 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 10:33:45.455832  896731 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 10:33:45.456005  896731 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1007 10:33:45.456105  896731 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 10:33:45.491995  896731 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1007 10:33:45.492065  896731 cache.go:56] Caching tarball of preloaded images
	I1007 10:33:45.492267  896731 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 10:33:45.494885  896731 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1007 10:33:45.494913  896731 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1007 10:33:45.575796  896731 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1007 10:33:49.338509  896731 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1007 10:33:49.338639  896731 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-490247 host does not exist
	  To start a cluster, run: "minikube start -p download-only-490247"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
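For reference, the invocation under test can be replayed outside the harness with the flags recorded in the Audit table above (a sketch only; it assumes a minikube checkout with the arm64 binary built at out/minikube-linux-arm64, as used in this run):

  # download-only run for the oldest supported Kubernetes version
  out/minikube-linux-arm64 start -o=json --download-only -p download-only-490247 \
    --force --alsologtostderr --kubernetes-version=v1.20.0 \
    --container-runtime=crio --driver=docker
  # 'logs' against a download-only profile exits 85, as seen above
  out/minikube-linux-arm64 logs -p download-only-490247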

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (13.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-linux-arm64 delete --all: (13.349204442s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (13.35s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-490247
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (5.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-777537 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-777537 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.189454979s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.19s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1007 10:34:10.727573  896726 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1007 10:34:10.727610  896726 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-777537
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-777537: exit status 85 (72.360927ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-490247 | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC |                     |
	|         | -p download-only-490247        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 07 Oct 24 10:33 UTC | 07 Oct 24 10:34 UTC |
	| delete  | -p download-only-490247        | download-only-490247 | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC | 07 Oct 24 10:34 UTC |
	| start   | -o=json --download-only        | download-only-777537 | jenkins | v1.34.0 | 07 Oct 24 10:34 UTC |                     |
	|         | -p download-only-777537        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 10:34:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 10:34:05.585948  896991 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:34:05.586082  896991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:34:05.586091  896991 out.go:358] Setting ErrFile to fd 2...
	I1007 10:34:05.586097  896991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:34:05.586345  896991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-891319/.minikube/bin
	I1007 10:34:05.586753  896991 out.go:352] Setting JSON to true
	I1007 10:34:05.587648  896991 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22590,"bootTime":1728274656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1007 10:34:05.587719  896991 start.go:139] virtualization:  
	I1007 10:34:05.590213  896991 out.go:97] [download-only-777537] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 10:34:05.590510  896991 notify.go:220] Checking for updates...
	I1007 10:34:05.593152  896991 out.go:169] MINIKUBE_LOCATION=19761
	I1007 10:34:05.595194  896991 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:34:05.596909  896991 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19761-891319/kubeconfig
	I1007 10:34:05.599041  896991 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-891319/.minikube
	I1007 10:34:05.600778  896991 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1007 10:34:05.604179  896991 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 10:34:05.604494  896991 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:34:05.632349  896991 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 10:34:05.632476  896991 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 10:34:05.684241  896991 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2024-10-07 10:34:05.674797179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 10:34:05.684378  896991 docker.go:318] overlay module found
	I1007 10:34:05.686366  896991 out.go:97] Using the docker driver based on user configuration
	I1007 10:34:05.686390  896991 start.go:297] selected driver: docker
	I1007 10:34:05.686397  896991 start.go:901] validating driver "docker" against <nil>
	I1007 10:34:05.686512  896991 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 10:34:05.740566  896991 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2024-10-07 10:34:05.727155632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 10:34:05.740831  896991 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 10:34:05.741116  896991 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1007 10:34:05.741293  896991 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 10:34:05.743476  896991 out.go:169] Using Docker driver with root privileges
	I1007 10:34:05.745080  896991 cni.go:84] Creating CNI manager for ""
	I1007 10:34:05.745140  896991 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1007 10:34:05.745154  896991 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 10:34:05.745244  896991 start.go:340] cluster config:
	{Name:download-only-777537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-777537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:34:05.747218  896991 out.go:97] Starting "download-only-777537" primary control-plane node in "download-only-777537" cluster
	I1007 10:34:05.747241  896991 cache.go:121] Beginning downloading kic base image for docker with crio
	I1007 10:34:05.749020  896991 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1007 10:34:05.749051  896991 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:34:05.749156  896991 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 10:34:05.767427  896991 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 10:34:05.767451  896991 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 10:34:05.767544  896991 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1007 10:34:05.767567  896991 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1007 10:34:05.767574  896991 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1007 10:34:05.767582  896991 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1007 10:34:05.804916  896991 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1007 10:34:05.804955  896991 cache.go:56] Caching tarball of preloaded images
	I1007 10:34:05.805589  896991 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:34:05.807658  896991 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1007 10:34:05.807692  896991 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I1007 10:34:05.893117  896991 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:8285fc512c7462f100de137f91fcd0ae -> /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I1007 10:34:09.283203  896991 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I1007 10:34:09.283332  896991 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19761-891319/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I1007 10:34:10.140867  896991 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 10:34:10.141304  896991 profile.go:143] Saving config to /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/download-only-777537/config.json ...
	I1007 10:34:10.141343  896991 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/download-only-777537/config.json: {Name:mk3dec62869a5fd196aa06f9d7062c15523bc279 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 10:34:10.142643  896991 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 10:34:10.142828  896991 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19761-891319/.minikube/cache/linux/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-777537 host does not exist
	  To start a cluster, run: "minikube start -p download-only-777537"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (13.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-linux-arm64 delete --all: (13.353019986s)
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (13.35s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-777537
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)
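The two Delete* subtests amount to the following cleanup sequence (sketch, same binary-path assumption as above); DeleteAlwaysSucceeds asserts that deleting a profile that was already removed by delete --all still exits cleanly:

  out/minikube-linux-arm64 delete --all                     # removes every profile
  out/minikube-linux-arm64 delete -p download-only-777537   # still succeeds on an already-deleted profile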

                                                
                                    
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
I1007 10:34:25.132701  896726 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-858793 --alsologtostderr --binary-mirror http://127.0.0.1:33319 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-858793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-858793
--- PASS: TestBinaryMirror (0.58s)
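A sketch of the binary-mirror check: kubectl is fetched through a local HTTP mirror instead of dl.k8s.io (the port 33319 below is simply the one this run happened to bind; a reachable mirror on that address is assumed):

  out/minikube-linux-arm64 start --download-only -p binary-mirror-858793 --alsologtostderr \
    --binary-mirror http://127.0.0.1:33319 --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 delete -p binary-mirror-858793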

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:934: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-952725
addons_test.go:934: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-952725: exit status 85 (86.327456ms)

                                                
                                                
-- stdout --
	* Profile "addons-952725" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-952725"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-952725
addons_test.go:945: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-952725: exit status 85 (97.838239ms)

                                                
                                                
-- stdout --
	* Profile "addons-952725" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-952725"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)
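Both PreSetup checks boil down to running addon toggles against a profile that does not exist yet; exit status 85 with the "Profile ... not found" hint shown above is the expected outcome (sketch):

  out/minikube-linux-arm64 addons enable dashboard -p addons-952725    # exit 85, profile not found
  out/minikube-linux-arm64 addons disable dashboard -p addons-952725   # exit 85, profile not found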

                                                
                                    
TestAddons/Setup (184.78s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-952725 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-952725 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m4.780278085s)
--- PASS: TestAddons/Setup (184.78s)
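The cluster used by all remaining addon tests is created with a single start invocation that enables every addon under test (sketch; flags copied from the run above):

  out/minikube-linux-arm64 start -p addons-952725 --wait=true --memory=4000 --alsologtostderr \
    --driver=docker --container-runtime=crio \
    --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
    --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin \
    --addons=yakd --addons=volcano --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher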

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-952725 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-952725 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)
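This check only verifies that the gcp-auth addon places its gcp-auth secret into a freshly created namespace (sketch):

  kubectl --context addons-952725 create ns new-namespace
  kubectl --context addons-952725 get secret gcp-auth -n new-namespace   # present once the webhook has synced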

                                                
                                    
TestAddons/parallel/Registry (16.23s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 8.127328ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-ckskn" [d430f6b1-cd25-4f9f-aa81-282aa63589cf] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004316377s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lxfj2" [0a453af2-0cb5-4656-8c53-e0132b6c6cfc] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003476537s
addons_test.go:331: (dbg) Run:  kubectl --context addons-952725 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-952725 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-952725 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.244069104s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 ip
2024/10/07 10:45:58 [DEBUG] GET http://192.168.58.2:5000
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.23s)
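The registry check probes the in-cluster service DNS name from a throwaway busybox pod and then fetches the registry endpoint on the node address from the host (sketch; the node IP 192.168.58.2 and port 5000 are the values observed in this run):

  kubectl --context addons-952725 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  out/minikube-linux-arm64 -p addons-952725 ip            # 192.168.58.2 here; the test then GETs http://<ip>:5000
  out/minikube-linux-arm64 -p addons-952725 addons disable registry --alsologtostderr -v=1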

                                                
                                    
TestAddons/parallel/InspektorGadget (10.75s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6qnjm" [7fa61fc4-7888-4d86-9a5b-556392ccc483] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004230961s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-952725 addons disable inspektor-gadget --alsologtostderr -v=1: (5.74720746s)
--- PASS: TestAddons/parallel/InspektorGadget (10.75s)

                                                
                                    
TestAddons/parallel/CSI (69.32s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1007 10:45:54.794323  896726 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1007 10:45:54.800705  896726 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1007 10:45:54.800736  896726 kapi.go:107] duration metric: took 6.426515ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 6.437862ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-952725 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-952725 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2def9983-d69f-41e8-bb2c-28f53d789053] Pending
helpers_test.go:344: "task-pv-pod" [2def9983-d69f-41e8-bb2c-28f53d789053] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2def9983-d69f-41e8-bb2c-28f53d789053] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004578684s
addons_test.go:511: (dbg) Run:  kubectl --context addons-952725 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-952725 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-952725 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-952725 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-952725 delete pod task-pv-pod: (1.28278883s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-952725 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-952725 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-952725 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a9643c8f-a574-42d5-be98-3862177a9fd9] Pending
helpers_test.go:344: "task-pv-pod-restore" [a9643c8f-a574-42d5-be98-3862177a9fd9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a9643c8f-a574-42d5-be98-3862177a9fd9] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.0038388s
addons_test.go:553: (dbg) Run:  kubectl --context addons-952725 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-952725 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-952725 delete volumesnapshot new-snapshot-demo
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-952725 addons disable volumesnapshots --alsologtostderr -v=1: (1.023280379s)
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-952725 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.928362162s)
--- PASS: TestAddons/parallel/CSI (69.32s)
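The CSI flow above is a create/snapshot/restore round trip against the hostpath driver. A condensed sketch of the same sequence (the testdata/ manifests are the ones shipped with minikube's integration tests, resolved relative to the test working directory):

  kubectl --context addons-952725 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-952725 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-952725 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-952725 delete pod task-pv-pod
  kubectl --context addons-952725 delete pvc hpvc
  kubectl --context addons-952725 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-952725 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
  kubectl --context addons-952725 get pvc hpvc-restore -o jsonpath={.status.phase}   # poll until Bound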

                                                
                                    
TestAddons/parallel/Headlamp (16.82s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-952725 --alsologtostderr -v=1
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-98nwg" [bf2701cd-2d9f-42ea-adad-125ccf24ac7f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-98nwg" [bf2701cd-2d9f-42ea-adad-125ccf24ac7f] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.00418988s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 addons disable headlamp --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-952725 addons disable headlamp --alsologtostderr -v=1: (5.836781119s)
--- PASS: TestAddons/parallel/Headlamp (16.82s)
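Headlamp is exercised purely through the addon toggles plus a wait for the labelled pod to become Ready (sketch; the kubectl query is an illustrative equivalent of the test's wait, not a command the harness runs):

  out/minikube-linux-arm64 addons enable headlamp -p addons-952725 --alsologtostderr -v=1
  kubectl --context addons-952725 get pods -n headlamp -l app.kubernetes.io/name=headlamp
  out/minikube-linux-arm64 -p addons-952725 addons disable headlamp --alsologtostderr -v=1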

                                                
                                    
TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-xchzw" [385912f1-3665-4224-aefd-9efebacbcbef] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00333428s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                    
TestAddons/parallel/LocalPath (11.04s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:883: (dbg) Run:  kubectl --context addons-952725 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:889: (dbg) Run:  kubectl --context addons-952725 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:893: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-952725 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [518cc717-d4ef-4199-92e6-a3b3e066d89a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [518cc717-d4ef-4199-92e6-a3b3e066d89a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [518cc717-d4ef-4199-92e6-a3b3e066d89a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003539863s
addons_test.go:901: (dbg) Run:  kubectl --context addons-952725 get pvc test-pvc -o=json
addons_test.go:910: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 ssh "cat /opt/local-path-provisioner/pvc-3e4c90e6-24f8-4bf4-8b84-84508c280cb4_default_test-pvc/file1"
addons_test.go:922: (dbg) Run:  kubectl --context addons-952725 delete pod test-local-path
addons_test.go:926: (dbg) Run:  kubectl --context addons-952725 delete pvc test-pvc
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (11.04s)
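For reference, the same local-path flow can be retraced by hand. A minimal sketch, assuming the addons-952725 profile is up with the storage-provisioner-rancher addon enabled and the repository's testdata manifests at hand; the pvc-<uuid> directory name varies per run, so a glob stands in for the exact path the test resolves from the PVC JSON, and minikube stands in for the freshly built out/minikube-linux-arm64 binary:

	kubectl --context addons-952725 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-952725 apply -f testdata/storage-provisioner-rancher/pod.yaml
	# Poll until the local-path provisioner binds the claim.
	kubectl --context addons-952725 get pvc test-pvc -n default -o jsonpath='{.status.phase}'
	# After the test-local-path pod completes, the written file is visible on the node.
	minikube -p addons-952725 ssh "cat /opt/local-path-provisioner/pvc-*_default_test-pvc/file1"
	# Clean up.
	kubectl --context addons-952725 delete pod test-local-path
	kubectl --context addons-952725 delete pvc test-pvc
	minikube -p addons-952725 addons disable storage-provisioner-rancher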

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fmzt5" [a9637f8d-dccb-461d-97b2-a4f5108a27d6] Running
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004029781s
addons_test.go:961: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-952725
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.78s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-hxbps" [5e98e7f3-6764-4926-9b90-9aa9df0a0c81] Running
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003294512s
addons_test.go:973: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 addons disable yakd --alsologtostderr -v=1
addons_test.go:973: (dbg) Done: out/minikube-linux-arm64 -p addons-952725 addons disable yakd --alsologtostderr -v=1: (5.772215195s)
--- PASS: TestAddons/parallel/Yakd (11.78s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.23s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-952725
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-952725: (11.949356534s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-952725
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-952725
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-952725
--- PASS: TestAddons/StoppedEnableDisable (12.23s)

                                                
                                    
x
+
TestCertOptions (38.65s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-398590 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-398590 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.607610398s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-398590 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-398590 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-398590 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-398590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-398590
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-398590: (2.008678713s)
--- PASS: TestCertOptions (38.65s)
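To verify the same certificate options by hand: the test asserts that the extra SANs and the non-default API server port end up in the generated apiserver certificate and kubeconfig. A minimal sketch using the flags exercised above (minikube stands in for out/minikube-linux-arm64; the grep filters are added here for readability):

	minikube start -p cert-options-398590 --memory=2048 \
	  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
	  --apiserver-names=localhost --apiserver-names=www.google.com \
	  --apiserver-port=8555 --driver=docker --container-runtime=crio
	# The SAN section should list the extra IPs and DNS names.
	minikube -p cert-options-398590 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A2 "Subject Alternative Name"
	# kubeconfig and admin.conf should both point at port 8555.
	kubectl --context cert-options-398590 config view
	minikube ssh -p cert-options-398590 -- "sudo cat /etc/kubernetes/admin.conf" | grep server
	minikube delete -p cert-options-398590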

                                                
                                    
x
+
TestCertExpiration (248.77s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-520538 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-520538 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (37.187866113s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-520538 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-520538 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (29.158499291s)
helpers_test.go:175: Cleaning up "cert-expiration-520538" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-520538
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-520538: (2.419580423s)
--- PASS: TestCertExpiration (248.77s)
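The expiration scenario can be reproduced manually. A minimal sketch: with a 3m window the certificates have expired by the time of the second start, which is expected to succeed and re-issue them (the explicit sleep is an assumption standing in for however the test waits out the window between the two starts):

	minikube start -p cert-expiration-520538 --memory=2048 --cert-expiration=3m \
	  --driver=docker --container-runtime=crio
	sleep 180   # let the 3-minute certificates expire
	minikube start -p cert-expiration-520538 --memory=2048 --cert-expiration=8760h \
	  --driver=docker --container-runtime=crio
	minikube delete -p cert-expiration-520538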

                                                
                                    
x
+
TestForceSystemdFlag (42.26s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-970103 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-970103 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.373277482s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-970103 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-970103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-970103
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-970103: (2.501325688s)
--- PASS: TestForceSystemdFlag (42.26s)
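What the test inspects after the forced-systemd start is the CRI-O drop-in that minikube writes. A minimal sketch of the same check; the expectation that the file selects the systemd cgroup manager (cgroup_manager = "systemd") is an assumption about the drop-in's contents, not something shown in the log above:

	minikube start -p force-systemd-flag-970103 --memory=2048 --force-systemd \
	  --driver=docker --container-runtime=crio
	# The drop-in is expected to carry the systemd cgroup_manager setting.
	minikube -p force-systemd-flag-970103 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
	minikube delete -p force-systemd-flag-970103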

                                                
                                    
x
+
TestForceSystemdEnv (44.62s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-049027 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1007 11:33:47.890226  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-049027 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.999180844s)
helpers_test.go:175: Cleaning up "force-systemd-env-049027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-049027
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-049027: (2.620910251s)
--- PASS: TestForceSystemdEnv (44.62s)

                                                
                                    
x
+
TestErrorSpam/setup (31.12s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-109472 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-109472 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-109472 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-109472 --driver=docker  --container-runtime=crio: (31.120103495s)
--- PASS: TestErrorSpam/setup (31.12s)

                                                
                                    
x
+
TestErrorSpam/start (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109472 --log_dir /tmp/nospam-109472 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109472 --log_dir /tmp/nospam-109472 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109472 --log_dir /tmp/nospam-109472 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

                                                
                                    
x
+
TestErrorSpam/status (1.12s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109472 --log_dir /tmp/nospam-109472 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109472 --log_dir /tmp/nospam-109472 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109472 --log_dir /tmp/nospam-109472 status
--- PASS: TestErrorSpam/status (1.12s)

                                                
                                    
x
+
TestErrorSpam/pause (1.77s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109472 --log_dir /tmp/nospam-109472 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109472 --log_dir /tmp/nospam-109472 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109472 --log_dir /tmp/nospam-109472 pause
--- PASS: TestErrorSpam/pause (1.77s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109472 --log_dir /tmp/nospam-109472 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109472 --log_dir /tmp/nospam-109472 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109472 --log_dir /tmp/nospam-109472 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

                                                
                                    
x
+
TestErrorSpam/stop (1.44s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109472 --log_dir /tmp/nospam-109472 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-109472 --log_dir /tmp/nospam-109472 stop: (1.255023174s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109472 --log_dir /tmp/nospam-109472 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-109472 --log_dir /tmp/nospam-109472 stop
--- PASS: TestErrorSpam/stop (1.44s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19761-891319/.minikube/files/etc/test/nested/copy/896726/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (45.33s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-721395 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-721395 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (45.331148468s)
--- PASS: TestFunctional/serial/StartWithProxy (45.33s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (40.21s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1007 10:54:07.224440  896726 config.go:182] Loaded profile config "functional-721395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-721395 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-721395 --alsologtostderr -v=8: (40.20700331s)
functional_test.go:663: soft start took 40.207557358s for "functional-721395" cluster.
I1007 10:54:47.431785  896726 config.go:182] Loaded profile config "functional-721395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (40.21s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.09s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-721395 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-721395 cache add registry.k8s.io/pause:3.1: (1.414483159s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-721395 cache add registry.k8s.io/pause:3.3: (1.42664171s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-721395 cache add registry.k8s.io/pause:latest: (1.285082266s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-721395 /tmp/TestFunctionalserialCacheCmdcacheadd_local13863489/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 cache add minikube-local-cache-test:functional-721395
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 cache delete minikube-local-cache-test:functional-721395
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-721395
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.42s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-721395 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (308.468435ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-721395 cache reload: (1.045444706s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)
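The cache-reload sequence above amounts to: delete the image from the node, confirm crictl no longer sees it, then push the cached copy back in. A minimal sketch:

	minikube -p functional-721395 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-721395 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits non-zero: image gone
	minikube -p functional-721395 cache reload
	minikube -p functional-721395 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again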

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 kubectl -- --context functional-721395 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-721395 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (39.26s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-721395 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-721395 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.261944098s)
functional_test.go:761: restart took 39.262051158s for "functional-721395" cluster.
I1007 10:55:35.211762  896726 config.go:182] Loaded profile config "functional-721395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (39.26s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-721395 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.7s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-721395 logs: (1.699018484s)
--- PASS: TestFunctional/serial/LogsCmd (1.70s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.76s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 logs --file /tmp/TestFunctionalserialLogsFileCmd1566862072/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-721395 logs --file /tmp/TestFunctionalserialLogsFileCmd1566862072/001/logs.txt: (1.761160576s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.76s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.29s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-721395 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-721395
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-721395: exit status 115 (717.580481ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.58.2:32313 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-721395 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.29s)
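The failure mode exercised here is a Service with no backing pods: minikube service still prints the URL table but refuses with SVC_UNREACHABLE and exit status 115. A minimal sketch (invalidsvc.yaml is the repository's test manifest):

	kubectl --context functional-721395 apply -f testdata/invalidsvc.yaml
	minikube -p functional-721395 service invalid-svc
	echo $?   # expected: 115
	kubectl --context functional-721395 delete -f testdata/invalidsvc.yaml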

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-721395 config get cpus: exit status 14 (75.524551ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-721395 config get cpus: exit status 14 (91.225248ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
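The config round-trip above is: config get on an unset key exits 14, config set stores the value, and config unset removes it again. A minimal sketch:

	minikube -p functional-721395 config unset cpus
	minikube -p functional-721395 config get cpus    # exit status 14: key not found
	minikube -p functional-721395 config set cpus 2
	minikube -p functional-721395 config get cpus    # prints 2
	minikube -p functional-721395 config unset cpus
	minikube -p functional-721395 config get cpus    # exit status 14 again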

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-721395 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-721395 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 930024: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.60s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-721395 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-721395 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (191.737091ms)

                                                
                                                
-- stdout --
	* [functional-721395] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19761-891319/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-891319/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 10:56:17.701214  929733 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:56:17.701381  929733 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:56:17.701429  929733 out.go:358] Setting ErrFile to fd 2...
	I1007 10:56:17.701448  929733 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:56:17.701898  929733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-891319/.minikube/bin
	I1007 10:56:17.702364  929733 out.go:352] Setting JSON to false
	I1007 10:56:17.703674  929733 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23922,"bootTime":1728274656,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1007 10:56:17.703813  929733 start.go:139] virtualization:  
	I1007 10:56:17.706282  929733 out.go:177] * [functional-721395] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 10:56:17.708995  929733 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 10:56:17.709037  929733 notify.go:220] Checking for updates...
	I1007 10:56:17.711416  929733 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:56:17.713920  929733 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-891319/kubeconfig
	I1007 10:56:17.716415  929733 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-891319/.minikube
	I1007 10:56:17.718684  929733 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 10:56:17.720904  929733 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 10:56:17.723698  929733 config.go:182] Loaded profile config "functional-721395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:56:17.724313  929733 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:56:17.752404  929733 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 10:56:17.752608  929733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 10:56:17.821283  929733 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-07 10:56:17.807490359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 10:56:17.821412  929733 docker.go:318] overlay module found
	I1007 10:56:17.824140  929733 out.go:177] * Using the docker driver based on existing profile
	I1007 10:56:17.826598  929733 start.go:297] selected driver: docker
	I1007 10:56:17.826624  929733 start.go:901] validating driver "docker" against &{Name:functional-721395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-721395 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:56:17.826760  929733 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 10:56:17.830020  929733 out.go:201] 
	W1007 10:56:17.832688  929733 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1007 10:56:17.835167  929733 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-721395 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)
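The dry run validates flags against the existing profile without starting anything; asking for less memory than the usable minimum fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23), while the same dry run without the undersized memory request passes. A minimal sketch:

	# Fails validation: 250MB is below the 1800MB minimum, exit status 23.
	minikube start -p functional-721395 --dry-run --memory 250MB --alsologtostderr \
	  --driver=docker --container-runtime=crio
	# Passes validation against the existing profile.
	minikube start -p functional-721395 --dry-run --alsologtostderr -v=1 \
	  --driver=docker --container-runtime=crio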

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-721395 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-721395 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (238.982065ms)

                                                
                                                
-- stdout --
	* [functional-721395] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19761-891319/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-891319/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 10:56:17.476905  929688 out.go:345] Setting OutFile to fd 1 ...
	I1007 10:56:17.477112  929688 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:56:17.477122  929688 out.go:358] Setting ErrFile to fd 2...
	I1007 10:56:17.477128  929688 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 10:56:17.477487  929688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-891319/.minikube/bin
	I1007 10:56:17.477914  929688 out.go:352] Setting JSON to false
	I1007 10:56:17.478945  929688 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":23922,"bootTime":1728274656,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1007 10:56:17.479047  929688 start.go:139] virtualization:  
	I1007 10:56:17.486622  929688 out.go:177] * [functional-721395] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1007 10:56:17.490109  929688 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 10:56:17.490286  929688 notify.go:220] Checking for updates...
	I1007 10:56:17.495592  929688 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 10:56:17.498272  929688 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-891319/kubeconfig
	I1007 10:56:17.500879  929688 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-891319/.minikube
	I1007 10:56:17.503550  929688 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 10:56:17.507711  929688 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 10:56:17.510904  929688 config.go:182] Loaded profile config "functional-721395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 10:56:17.511515  929688 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 10:56:17.565980  929688 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 10:56:17.566168  929688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 10:56:17.622228  929688 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-07 10:56:17.611528517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 10:56:17.622366  929688 docker.go:318] overlay module found
	I1007 10:56:17.625323  929688 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1007 10:56:17.636015  929688 start.go:297] selected driver: docker
	I1007 10:56:17.636038  929688 start.go:901] validating driver "docker" against &{Name:functional-721395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-721395 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 10:56:17.636161  929688 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 10:56:17.640621  929688 out.go:201] 
	W1007 10:56:17.642208  929688 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1007 10:56:17.643991  929688 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (13.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-721395 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-721395 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-zlmwv" [468c5755-400d-4745-9a41-944d5e18ecfb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-zlmwv" [468c5755-400d-4745-9a41-944d5e18ecfb] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.003735535s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.58.2:30360
functional_test.go:1675: http://192.168.58.2:30360: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-zlmwv

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.58.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.58.2:30360
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.66s)
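Note: the test above exercises the full NodePort round trip. A minimal recap of that workflow, using the names and image from this run (the NodePort 30360 was assigned dynamically and will differ between runs):

# Create a deployment and expose it as a NodePort service
kubectl --context functional-721395 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-721395 expose deployment hello-node-connect --type=NodePort --port=8080
# Ask minikube for the reachable URL, then request it from the host
out/minikube-linux-arm64 -p functional-721395 service hello-node-connect --url
curl http://192.168.58.2:30360    # URL reported by the previous command in this run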

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)
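Note: both addon-listing forms used above can be run directly; the JSON form is the one to consume from scripts:

out/minikube-linux-arm64 -p functional-721395 addons list            # table of addons and their current state
out/minikube-linux-arm64 -p functional-721395 addons list -o json    # same information as JSON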

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (27.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [592bf8b5-5b69-46a5-89e2-051d2810ef0a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003734777s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-721395 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-721395 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-721395 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-721395 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4a60c7d9-e1fd-492e-a294-2032f96e01bf] Pending
helpers_test.go:344: "sp-pod" [4a60c7d9-e1fd-492e-a294-2032f96e01bf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4a60c7d9-e1fd-492e-a294-2032f96e01bf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.006613686s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-721395 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-721395 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-721395 delete -f testdata/storage-provisioner/pod.yaml: (1.056053667s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-721395 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0d2cf111-e998-4bad-8ba8-67bb9abc006c] Pending
helpers_test.go:344: "sp-pod" [0d2cf111-e998-4bad-8ba8-67bb9abc006c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003769312s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-721395 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.06s)
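Note: the sequence above verifies that data written to the claim survives pod deletion. Condensed, using the test's own manifests under testdata/storage-provisioner/:

# Create the claim and a pod that mounts it at /tmp/mount
kubectl --context functional-721395 apply -f testdata/storage-provisioner/pvc.yaml
kubectl --context functional-721395 apply -f testdata/storage-provisioner/pod.yaml
# Write a marker file through the first pod, then delete that pod
kubectl --context functional-721395 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-721395 delete -f testdata/storage-provisioner/pod.yaml
# Recreate the pod and confirm the file is still on the volume
kubectl --context functional-721395 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-721395 exec sp-pod -- ls /tmp/mount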

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh -n functional-721395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 cp functional-721395:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1190304073/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh -n functional-721395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh -n functional-721395 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.40s)
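Note: the three cp directions checked above (host to node, node back to host, and host to a not-yet-existing node path) look like this when run by hand; the host-side destination is written here to the current directory instead of the test's per-run temp path:

# Copy a local file into the node, then read it back over ssh
out/minikube-linux-arm64 -p functional-721395 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-arm64 -p functional-721395 ssh -n functional-721395 "sudo cat /home/docker/cp-test.txt"
# Copy the file from the node back to the host
out/minikube-linux-arm64 -p functional-721395 cp functional-721395:/home/docker/cp-test.txt ./cp-test.txt
# Missing destination directories on the node are created as needed
out/minikube-linux-arm64 -p functional-721395 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt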

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/896726/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "sudo cat /etc/test/nested/copy/896726/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/896726.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "sudo cat /etc/ssl/certs/896726.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/896726.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "sudo cat /usr/share/ca-certificates/896726.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/8967262.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "sudo cat /etc/ssl/certs/8967262.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/8967262.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "sudo cat /usr/share/ca-certificates/8967262.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.14s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-721395 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-721395 ssh "sudo systemctl is-active docker": exit status 1 (296.742436ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-721395 ssh "sudo systemctl is-active containerd": exit status 1 (291.283312ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
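Note: the non-zero exits above are the expected result here: with cri-o as the active runtime for this profile, docker and containerd both report "inactive", so systemctl is-active exits non-zero. To repeat the check by hand (the crio line is an addition for comparison and should report "active" on this profile):

out/minikube-linux-arm64 -p functional-721395 ssh "sudo systemctl is-active docker"      # inactive, non-zero exit
out/minikube-linux-arm64 -p functional-721395 ssh "sudo systemctl is-active containerd"  # inactive, non-zero exit
out/minikube-linux-arm64 -p functional-721395 ssh "sudo systemctl is-active crio"        # active runtime for this profile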

                                                
                                    
x
+
TestFunctional/parallel/License (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-721395 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-721395 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-721395 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 927653: os: process already finished
helpers_test.go:502: unable to terminate pid 927464: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-721395 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-721395 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-721395 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ae89bd3a-002f-476f-bcee-4854bff05f94] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ae89bd3a-002f-476f-bcee-4854bff05f94] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003674349s
I1007 10:55:53.423152  896726 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.60s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-721395 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.72.30 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-721395 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
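Note: taken together, the TunnelCmd subtests above cover the usual workflow: start a tunnel, deploy a LoadBalancer-type service, read the ingress IP it receives, reach it from the host, and stop the tunnel. A condensed sketch using this run's values (the ingress IP 10.105.72.30 is specific to this run):

# Start the tunnel in the background; it keeps running until stopped
out/minikube-linux-arm64 -p functional-721395 tunnel --alsologtostderr &
# Deploy the test service and read the ingress IP assigned to it
kubectl --context functional-721395 apply -f testdata/testsvc.yaml
kubectl --context functional-721395 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# The reported IP is now reachable directly from the host
curl http://10.105.72.30
# Stop the background tunnel when finished
kill %1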

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-721395 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-721395 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-fkr7h" [6c6356a4-5658-4e63-8b04-823c559ed65d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-fkr7h" [6c6356a4-5658-4e63-8b04-823c559ed65d] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003985277s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "369.178768ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "60.857849ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "355.836709ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "65.666942ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
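Note: the three profile_* subtests above differ only in output form; the "profile lis" invocation in profile_not_create appears to be the deliberately misspelled argument the test uses to confirm that no profile gets created as a side effect. For reference (the -l/--light interpretation below is an inference from the much shorter timings):

out/minikube-linux-arm64 profile list                    # table of known profiles, probes cluster status (~370ms above)
out/minikube-linux-arm64 profile list -l                 # light listing, apparently skipping the status probes (~60ms above)
out/minikube-linux-arm64 profile list -o json            # full listing as JSON
out/minikube-linux-arm64 profile list -o json --light    # light listing as JSON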

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-721395 /tmp/TestFunctionalparallelMountCmdany-port420129615/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728298573796843985" to /tmp/TestFunctionalparallelMountCmdany-port420129615/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728298573796843985" to /tmp/TestFunctionalparallelMountCmdany-port420129615/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728298573796843985" to /tmp/TestFunctionalparallelMountCmdany-port420129615/001/test-1728298573796843985
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-721395 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (430.735091ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1007 10:56:14.228503  896726 retry.go:31] will retry after 486.276504ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  7 10:56 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  7 10:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  7 10:56 test-1728298573796843985
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh cat /mount-9p/test-1728298573796843985
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-721395 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [58235903-66de-41bd-9a2e-1632f6d72e2c] Pending
helpers_test.go:344: "busybox-mount" [58235903-66de-41bd-9a2e-1632f6d72e2c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [58235903-66de-41bd-9a2e-1632f6d72e2c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [58235903-66de-41bd-9a2e-1632f6d72e2c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004667637s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-721395 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-721395 /tmp/TestFunctionalparallelMountCmdany-port420129615/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.54s)
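Note: the any-port block above is the core 9p mount flow: mount a host directory into the guest, verify it with findmnt (the single failed findmnt followed by a retry is normal while the mount comes up), exercise it from a pod, and unmount. Condensed, with this run's temp path:

# Mount a host directory into the guest at /mount-9p; runs until interrupted
out/minikube-linux-arm64 mount -p functional-721395 /tmp/TestFunctionalparallelMountCmdany-port420129615/001:/mount-9p --alsologtostderr -v=1 &
# Confirm the 9p mount exists and list its contents
out/minikube-linux-arm64 -p functional-721395 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-arm64 -p functional-721395 ssh -- ls -la /mount-9p
# Let a pod read and write through the mount, then tear everything down
kubectl --context functional-721395 replace --force -f testdata/busybox-mount-test.yaml
out/minikube-linux-arm64 -p functional-721395 ssh "sudo umount -f /mount-9p"
kill %1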

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 service list -o json
functional_test.go:1494: Took "498.236712ms" to run "out/minikube-linux-arm64 -p functional-721395 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.58.2:30850
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.58.2:30850
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
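Note: the ServiceCmd subtests above all interrogate the same hello-node NodePort service in different ways; the port 30850 was allocated for this run. In summary:

out/minikube-linux-arm64 -p functional-721395 service list                                          # services with their URLs
out/minikube-linux-arm64 -p functional-721395 service list -o json                                  # same, machine-readable
out/minikube-linux-arm64 -p functional-721395 service --namespace=default --https --url hello-node  # https://192.168.58.2:30850 in this run
out/minikube-linux-arm64 -p functional-721395 service hello-node --url --format={{.IP}}             # node IP only
out/minikube-linux-arm64 -p functional-721395 service hello-node --url                              # http://192.168.58.2:30850 in this run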

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-721395 /tmp/TestFunctionalparallelMountCmdspecific-port3293246891/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-721395 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (547.029847ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1007 10:56:23.887251  896726 retry.go:31] will retry after 479.874365ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-721395 /tmp/TestFunctionalparallelMountCmdspecific-port3293246891/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-721395 ssh "sudo umount -f /mount-9p": exit status 1 (346.369351ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-721395 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-721395 /tmp/TestFunctionalparallelMountCmdspecific-port3293246891/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.33s)
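Note: the specific-port variant only differs from any-port in pinning the 9p server to a fixed host port, and the failed umount near the end simply reflects that the mount had already been torn down by the time that cleanup step ran:

# Same mount flow as above, but force the 9p server onto host port 46464
out/minikube-linux-arm64 mount -p functional-721395 /tmp/TestFunctionalparallelMountCmdspecific-port3293246891/001:/mount-9p --alsologtostderr -v=1 --port 46464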

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-721395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2215436869/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-721395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2215436869/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-721395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2215436869/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-721395 ssh "findmnt -T" /mount1: exit status 1 (830.196979ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1007 10:56:26.500110  896726 retry.go:31] will retry after 659.524085ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "findmnt -T" /mount2
2024/10/07 10:56:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-721395 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-721395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2215436869/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-721395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2215436869/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-721395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2215436869/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.46s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-721395 version -o=json --components: (1.016871681s)
--- PASS: TestFunctional/parallel/Version/components (1.02s)
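Note: version --short prints only the minikube version itself, while -o=json --components also reports the versions of the bundled components, which presumably accounts for the roughly one-second runtime above:

out/minikube-linux-arm64 -p functional-721395 version --short                  # minikube version only
out/minikube-linux-arm64 -p functional-721395 version -o=json --components     # component versions as JSON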

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-721395 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-721395
localhost/kicbase/echo-server:functional-721395
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-721395 image ls --format short --alsologtostderr:
I1007 10:56:35.445577  932554 out.go:345] Setting OutFile to fd 1 ...
I1007 10:56:35.445911  932554 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:56:35.445938  932554 out.go:358] Setting ErrFile to fd 2...
I1007 10:56:35.445960  932554 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:56:35.446237  932554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-891319/.minikube/bin
I1007 10:56:35.447025  932554 config.go:182] Loaded profile config "functional-721395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:56:35.447201  932554 config.go:182] Loaded profile config "functional-721395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:56:35.447724  932554 cli_runner.go:164] Run: docker container inspect functional-721395 --format={{.State.Status}}
I1007 10:56:35.471808  932554 ssh_runner.go:195] Run: systemctl --version
I1007 10:56:35.471864  932554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-721395
I1007 10:56:35.499284  932554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/functional-721395/id_rsa Username:docker}
I1007 10:56:35.597982  932554 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
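Note: image ls supports several output formats; short (above), table, json, and yaml are all exercised in the following blocks, and the stderr traces show that each is backed by the same "sudo crictl images --output json" call inside the guest:

out/minikube-linux-arm64 -p functional-721395 image ls --format short   # repo:tag names only
out/minikube-linux-arm64 -p functional-721395 image ls --format table   # table with image IDs and sizes
out/minikube-linux-arm64 -p functional-721395 image ls --format json    # full digests, machine-readable
out/minikube-linux-arm64 -p functional-721395 image ls --format yaml    # same data as YAML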

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-721395 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/library/nginx                 | alpine             | 577a23b5858b9 | 52.3MB |
| docker.io/library/nginx                 | latest             | 048e090385966 | 201MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/kicbase/echo-server           | functional-721395  | ce2d2cda2d858 | 4.79MB |
| localhost/minikube-local-cache-test     | functional-721395  | 95fa2d456df40 | 3.33kB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 279f381cb3736 | 86.9MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 24a140c548c07 | 96MB   |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 7f8aa378bb47d | 67MB   |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | d3f53a98c0a9d | 92.6MB |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-721395 image ls --format table --alsologtostderr:
I1007 10:56:36.052764  932706 out.go:345] Setting OutFile to fd 1 ...
I1007 10:56:36.052987  932706 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:56:36.053015  932706 out.go:358] Setting ErrFile to fd 2...
I1007 10:56:36.053037  932706 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:56:36.053373  932706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-891319/.minikube/bin
I1007 10:56:36.054124  932706 config.go:182] Loaded profile config "functional-721395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:56:36.054318  932706 config.go:182] Loaded profile config "functional-721395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:56:36.054850  932706 cli_runner.go:164] Run: docker container inspect functional-721395 --format={{.State.Status}}
I1007 10:56:36.077473  932706 ssh_runner.go:195] Run: systemctl --version
I1007 10:56:36.077532  932706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-721395
I1007 10:56:36.105680  932706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/functional-721395/id_rsa Username:docker}
I1007 10:56:36.204758  932706 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-721395 image ls --format json --alsologtostderr:
[{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67007814"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951f
bcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ff
b600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb","registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"92632544"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"86930758"},{"id":"24a140c548c075e48
7e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"95951255"},{"id":"048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9","repoDigests":["docker.io/library/nginx@sha256:96c43ba316370e0c1d1810b9693e647cc62a172a842d77888c299f3944922491","docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"200984127"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.2
8.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"95fa2d456df40912322321a0f3238e5a1d6270493b78a99588fb6c9e8cc66ed3","repoDigests":["localhost/minikube-local-cache-test@sha256:94a10d53044a32452eb39093705cd92ce842824c05e5917aed255107386db173"],"repoTags":["localhost/minikube-local-cache-test:functional-721395"],"size":"3330"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b
6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":["docker.io/library/nginx@sha256:2140dad
235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250","docker.io/library/nginx@sha256:d1f949a77b81762af560a6e8f3f2bc2817f1c575ede5a756749e3c5d459e6478"],"repoTags":["docker.io/library/nginx:alpine"],"size":"52254450"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-721395"],"size":"4788229"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-721395 image ls --format json --alsologtostderr:
I1007 10:56:35.743611  932622 out.go:345] Setting OutFile to fd 1 ...
I1007 10:56:35.746092  932622 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:56:35.746111  932622 out.go:358] Setting ErrFile to fd 2...
I1007 10:56:35.746120  932622 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:56:35.746396  932622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-891319/.minikube/bin
I1007 10:56:35.747050  932622 config.go:182] Loaded profile config "functional-721395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:56:35.747168  932622 config.go:182] Loaded profile config "functional-721395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:56:35.747650  932622 cli_runner.go:164] Run: docker container inspect functional-721395 --format={{.State.Status}}
I1007 10:56:35.769254  932622 ssh_runner.go:195] Run: systemctl --version
I1007 10:56:35.769304  932622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-721395
I1007 10:56:35.805107  932622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/functional-721395/id_rsa Username:docker}
I1007 10:56:35.904691  932622 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-721395 image ls --format yaml --alsologtostderr:
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "95951255"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
- docker.io/library/nginx@sha256:d1f949a77b81762af560a6e8f3f2bc2817f1c575ede5a756749e3c5d459e6478
repoTags:
- docker.io/library/nginx:alpine
size: "52254450"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "86930758"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67007814"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests:
- docker.io/library/nginx@sha256:96c43ba316370e0c1d1810b9693e647cc62a172a842d77888c299f3944922491
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "200984127"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-721395
size: "4788229"
- id: 95fa2d456df40912322321a0f3238e5a1d6270493b78a99588fb6c9e8cc66ed3
repoDigests:
- localhost/minikube-local-cache-test@sha256:94a10d53044a32452eb39093705cd92ce842824c05e5917aed255107386db173
repoTags:
- localhost/minikube-local-cache-test:functional-721395
size: "3330"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
- registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "92632544"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-721395 image ls --format yaml --alsologtostderr:
I1007 10:56:35.435486  932555 out.go:345] Setting OutFile to fd 1 ...
I1007 10:56:35.435630  932555 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:56:35.435636  932555 out.go:358] Setting ErrFile to fd 2...
I1007 10:56:35.435641  932555 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:56:35.435889  932555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-891319/.minikube/bin
I1007 10:56:35.436665  932555 config.go:182] Loaded profile config "functional-721395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:56:35.436803  932555 config.go:182] Loaded profile config "functional-721395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:56:35.437274  932555 cli_runner.go:164] Run: docker container inspect functional-721395 --format={{.State.Status}}
I1007 10:56:35.462080  932555 ssh_runner.go:195] Run: systemctl --version
I1007 10:56:35.462139  932555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-721395
I1007 10:56:35.500096  932555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/functional-721395/id_rsa Username:docker}
I1007 10:56:35.598251  932555 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)
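The listing above is assembled by shelling into the node and querying the container runtime (the Stderr log shows ssh_runner invoking crictl). A minimal sketch of reproducing it by hand, assuming the functional-721395 profile is still running:

# query the CRI directly inside the node, as the test's ssh_runner does
out/minikube-linux-arm64 -p functional-721395 ssh "sudo crictl images --output json"
# or let minikube render the same data as the YAML shown above
out/minikube-linux-arm64 -p functional-721395 image ls --format yaml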

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-721395 ssh pgrep buildkitd: exit status 1 (315.882725ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image build -t localhost/my-image:functional-721395 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-721395 image build -t localhost/my-image:functional-721395 testdata/build --alsologtostderr: (2.982437935s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-721395 image build -t localhost/my-image:functional-721395 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 38373e17691
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-721395
--> a0bd4cf7893
Successfully tagged localhost/my-image:functional-721395
a0bd4cf7893667f87d480c3ecd44374691e73f655045d124135d70cd1d0f8a6c
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-721395 image build -t localhost/my-image:functional-721395 testdata/build --alsologtostderr:
I1007 10:56:36.046054  932711 out.go:345] Setting OutFile to fd 1 ...
I1007 10:56:36.046762  932711 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:56:36.046775  932711 out.go:358] Setting ErrFile to fd 2...
I1007 10:56:36.046782  932711 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 10:56:36.047240  932711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-891319/.minikube/bin
I1007 10:56:36.049435  932711 config.go:182] Loaded profile config "functional-721395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:56:36.050220  932711 config.go:182] Loaded profile config "functional-721395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 10:56:36.050810  932711 cli_runner.go:164] Run: docker container inspect functional-721395 --format={{.State.Status}}
I1007 10:56:36.072642  932711 ssh_runner.go:195] Run: systemctl --version
I1007 10:56:36.072702  932711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-721395
I1007 10:56:36.093153  932711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33891 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/functional-721395/id_rsa Username:docker}
I1007 10:56:36.188823  932711 build_images.go:161] Building image from path: /tmp/build.599467853.tar
I1007 10:56:36.188892  932711 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1007 10:56:36.198552  932711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.599467853.tar
I1007 10:56:36.202498  932711 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.599467853.tar: stat -c "%s %y" /var/lib/minikube/build/build.599467853.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.599467853.tar': No such file or directory
I1007 10:56:36.202527  932711 ssh_runner.go:362] scp /tmp/build.599467853.tar --> /var/lib/minikube/build/build.599467853.tar (3072 bytes)
I1007 10:56:36.229168  932711 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.599467853
I1007 10:56:36.242982  932711 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.599467853 -xf /var/lib/minikube/build/build.599467853.tar
I1007 10:56:36.252199  932711 crio.go:315] Building image: /var/lib/minikube/build/build.599467853
I1007 10:56:36.252302  932711 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-721395 /var/lib/minikube/build/build.599467853 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1007 10:56:38.925748  932711 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-721395 /var/lib/minikube/build/build.599467853 --cgroup-manager=cgroupfs: (2.673411518s)
I1007 10:56:38.925812  932711 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.599467853
I1007 10:56:38.934857  932711 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.599467853.tar
I1007 10:56:38.943628  932711 build_images.go:217] Built localhost/my-image:functional-721395 from /tmp/build.599467853.tar
I1007 10:56:38.943667  932711 build_images.go:133] succeeded building to: functional-721395
I1007 10:56:38.943673  932711 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.54s)
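The three STEP lines above imply the shape of the testdata/build context. A minimal sketch, assuming a Dockerfile of that shape and an arbitrary content.txt (the real file contents are not shown in this log):

# hypothetical reconstruction of the build context the test appears to use
mkdir -p testdata/build
echo 'placeholder' > testdata/build/content.txt   # contents are an assumption
cat > testdata/build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# build inside the cluster's runtime (delegated to podman on crio, per the log above)
out/minikube-linux-arm64 -p functional-721395 image build -t localhost/my-image:functional-721395 testdata/build --alsologtostderr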

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-721395
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
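All three update-context cases run the same command; it rewrites the profile's kubeconfig entry so kubectl keeps pointing at the current API server address. A quick manual check, assuming kubectl is on PATH:

out/minikube-linux-arm64 -p functional-721395 update-context --alsologtostderr -v=2
# if the rewritten kubeconfig is correct, this should reach the API server
kubectl --context functional-721395 get nodes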

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image load --daemon kicbase/echo-server:functional-721395 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-721395 image load --daemon kicbase/echo-server:functional-721395 --alsologtostderr: (1.353917105s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.67s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image load --daemon kicbase/echo-server:functional-721395 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-721395
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image load --daemon kicbase/echo-server:functional-721395 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)
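The three load tests above exercise the same path: an image that exists only in the host Docker daemon is streamed into the node's crio storage. A condensed sketch of that round trip, assuming Docker on the host and a running functional-721395 profile:

docker pull kicbase/echo-server:latest
docker tag kicbase/echo-server:latest kicbase/echo-server:functional-721395
# copy the image from the host daemon into the cluster runtime
out/minikube-linux-arm64 -p functional-721395 image load --daemon kicbase/echo-server:functional-721395 --alsologtostderr
# confirm it is now visible inside the cluster
out/minikube-linux-arm64 -p functional-721395 image ls | grep echo-server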

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image save kicbase/echo-server:functional-721395 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image rm kicbase/echo-server:functional-721395 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-721395
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-721395 image save --daemon kicbase/echo-server:functional-721395 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-721395
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
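Taken together, the save/remove/load tests cover a full export-import cycle between the cluster runtime, a tarball on disk, and the host Docker daemon. A condensed sketch, with the tar path chosen arbitrarily for illustration:

# export from the cluster, drop the image, re-import from the tarball
out/minikube-linux-arm64 -p functional-721395 image save kicbase/echo-server:functional-721395 /tmp/echo-server-save.tar
out/minikube-linux-arm64 -p functional-721395 image rm kicbase/echo-server:functional-721395
out/minikube-linux-arm64 -p functional-721395 image load /tmp/echo-server-save.tar
# push a copy back into the host Docker daemon and verify it landed
out/minikube-linux-arm64 -p functional-721395 image save --daemon kicbase/echo-server:functional-721395
docker image inspect localhost/kicbase/echo-server:functional-721395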

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-721395
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-721395
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-721395
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (170.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-685971 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1007 10:57:31.403285  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:57:31.409611  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:57:31.421086  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:57:31.442473  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:57:31.483856  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:57:31.565218  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:57:31.726662  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:57:32.048316  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:57:32.690329  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:57:33.971683  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:57:36.533017  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:57:41.654324  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:57:51.895984  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:58:12.378064  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 10:58:53.339406  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-685971 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m49.978478992s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (170.85s)
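The --ha flag provisions three control-plane nodes behind what the later status logs show as a shared endpoint (https://192.168.58.254:8443). A minimal sketch of starting and inspecting such a cluster, using the same flags the test passes:

out/minikube-linux-arm64 start -p ha-685971 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=crio
out/minikube-linux-arm64 -p ha-685971 status -v=7 --alsologtostderr
# three of the nodes should carry the control-plane role
kubectl --context ha-685971 get nodes -o wide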

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (8.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-685971 -- rollout status deployment/busybox: (5.8023405s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- exec busybox-7dff88458-qfq6v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- exec busybox-7dff88458-shl4p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- exec busybox-7dff88458-w84nw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- exec busybox-7dff88458-qfq6v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- exec busybox-7dff88458-shl4p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- exec busybox-7dff88458-w84nw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- exec busybox-7dff88458-qfq6v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- exec busybox-7dff88458-shl4p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- exec busybox-7dff88458-w84nw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.88s)
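The assertions above boil down to: every busybox replica must resolve an external name, the cluster service name, and its FQDN. A condensed sketch of that loop, assuming the busybox pods are the only pods in the default namespace (as in this run):

kubectl --context ha-685971 apply -f testdata/ha/ha-pod-dns-test.yaml
kubectl --context ha-685971 rollout status deployment/busybox
for pod in $(kubectl --context ha-685971 get pods -o jsonpath='{.items[*].metadata.name}'); do
  kubectl --context ha-685971 exec "$pod" -- nslookup kubernetes.io
  kubectl --context ha-685971 exec "$pod" -- nslookup kubernetes.default
  kubectl --context ha-685971 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
done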

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- exec busybox-7dff88458-qfq6v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- exec busybox-7dff88458-qfq6v -- sh -c "ping -c 1 192.168.58.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- exec busybox-7dff88458-shl4p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- exec busybox-7dff88458-shl4p -- sh -c "ping -c 1 192.168.58.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- exec busybox-7dff88458-w84nw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-685971 -- exec busybox-7dff88458-w84nw -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.71s)
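Each pod first resolves host.minikube.internal (which came back as 192.168.58.1 in this run) and then pings the resolved address. The same awk/cut pipeline from the test, wrapped in a loop over the pods:

for pod in $(kubectl --context ha-685971 get pods -o jsonpath='{.items[*].metadata.name}'); do
  host_ip=$(kubectl --context ha-685971 exec "$pod" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl --context ha-685971 exec "$pod" -- sh -c "ping -c 1 $host_ip"
done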

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (38.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-685971 -v=7 --alsologtostderr
E1007 11:00:15.261279  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-685971 -v=7 --alsologtostderr: (37.139857285s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-685971 status -v=7 --alsologtostderr: (1.03150485s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (38.17s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-685971 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.15s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (18.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp testdata/cp-test.txt ha-685971:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp ha-685971:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2126305494/001/cp-test_ha-685971.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp ha-685971:/home/docker/cp-test.txt ha-685971-m02:/home/docker/cp-test_ha-685971_ha-685971-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m02 "sudo cat /home/docker/cp-test_ha-685971_ha-685971-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp ha-685971:/home/docker/cp-test.txt ha-685971-m03:/home/docker/cp-test_ha-685971_ha-685971-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m03 "sudo cat /home/docker/cp-test_ha-685971_ha-685971-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp ha-685971:/home/docker/cp-test.txt ha-685971-m04:/home/docker/cp-test_ha-685971_ha-685971-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m04 "sudo cat /home/docker/cp-test_ha-685971_ha-685971-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp testdata/cp-test.txt ha-685971-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp ha-685971-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2126305494/001/cp-test_ha-685971-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp ha-685971-m02:/home/docker/cp-test.txt ha-685971:/home/docker/cp-test_ha-685971-m02_ha-685971.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971 "sudo cat /home/docker/cp-test_ha-685971-m02_ha-685971.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp ha-685971-m02:/home/docker/cp-test.txt ha-685971-m03:/home/docker/cp-test_ha-685971-m02_ha-685971-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m03 "sudo cat /home/docker/cp-test_ha-685971-m02_ha-685971-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp ha-685971-m02:/home/docker/cp-test.txt ha-685971-m04:/home/docker/cp-test_ha-685971-m02_ha-685971-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m04 "sudo cat /home/docker/cp-test_ha-685971-m02_ha-685971-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp testdata/cp-test.txt ha-685971-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp ha-685971-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2126305494/001/cp-test_ha-685971-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp ha-685971-m03:/home/docker/cp-test.txt ha-685971:/home/docker/cp-test_ha-685971-m03_ha-685971.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971 "sudo cat /home/docker/cp-test_ha-685971-m03_ha-685971.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp ha-685971-m03:/home/docker/cp-test.txt ha-685971-m02:/home/docker/cp-test_ha-685971-m03_ha-685971-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m02 "sudo cat /home/docker/cp-test_ha-685971-m03_ha-685971-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp ha-685971-m03:/home/docker/cp-test.txt ha-685971-m04:/home/docker/cp-test_ha-685971-m03_ha-685971-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m04 "sudo cat /home/docker/cp-test_ha-685971-m03_ha-685971-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp testdata/cp-test.txt ha-685971-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp ha-685971-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2126305494/001/cp-test_ha-685971-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp ha-685971-m04:/home/docker/cp-test.txt ha-685971:/home/docker/cp-test_ha-685971-m04_ha-685971.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971 "sudo cat /home/docker/cp-test_ha-685971-m04_ha-685971.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp ha-685971-m04:/home/docker/cp-test.txt ha-685971-m02:/home/docker/cp-test_ha-685971-m04_ha-685971-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m02 "sudo cat /home/docker/cp-test_ha-685971-m04_ha-685971-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 cp ha-685971-m04:/home/docker/cp-test.txt ha-685971-m03:/home/docker/cp-test_ha-685971-m04_ha-685971-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 ssh -n ha-685971-m03 "sudo cat /home/docker/cp-test_ha-685971-m04_ha-685971-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.71s)
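The long run of cp/ssh pairs above follows one pattern: push testdata/cp-test.txt to each node, then copy it to every other node and read it back over ssh. The same coverage as a pair of nested loops, assuming the four node names from this run:

nodes="ha-685971 ha-685971-m02 ha-685971-m03 ha-685971-m04"
for src in $nodes; do
  out/minikube-linux-arm64 -p ha-685971 cp testdata/cp-test.txt "$src":/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-685971 ssh -n "$src" "sudo cat /home/docker/cp-test.txt"
  for dst in $nodes; do
    [ "$src" = "$dst" ] && continue
    out/minikube-linux-arm64 -p ha-685971 cp "$src":/home/docker/cp-test.txt "$dst":/home/docker/cp-test_"$src"_"$dst".txt
    out/minikube-linux-arm64 -p ha-685971 ssh -n "$dst" "sudo cat /home/docker/cp-test_${src}_${dst}.txt"
  done
done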

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 node stop m02 -v=7 --alsologtostderr
E1007 11:00:44.824455  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:00:44.830913  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:00:44.842205  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:00:44.863652  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:00:44.906191  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:00:44.987600  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:00:45.149621  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:00:45.471203  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:00:46.112871  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:00:47.394167  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:00:49.956657  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-685971 node stop m02 -v=7 --alsologtostderr: (12.044310116s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-685971 status -v=7 --alsologtostderr: exit status 7 (734.628682ms)

                                                
                                                
-- stdout --
	ha-685971
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-685971-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-685971-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-685971-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 11:00:53.601244  948383 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:00:53.601399  948383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:00:53.601410  948383 out.go:358] Setting ErrFile to fd 2...
	I1007 11:00:53.601416  948383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:00:53.601665  948383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-891319/.minikube/bin
	I1007 11:00:53.601850  948383 out.go:352] Setting JSON to false
	I1007 11:00:53.601875  948383 mustload.go:65] Loading cluster: ha-685971
	I1007 11:00:53.601921  948383 notify.go:220] Checking for updates...
	I1007 11:00:53.602273  948383 config.go:182] Loaded profile config "ha-685971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:00:53.602349  948383 status.go:174] checking status of ha-685971 ...
	I1007 11:00:53.603163  948383 cli_runner.go:164] Run: docker container inspect ha-685971 --format={{.State.Status}}
	I1007 11:00:53.622956  948383 status.go:371] ha-685971 host status = "Running" (err=<nil>)
	I1007 11:00:53.622984  948383 host.go:66] Checking if "ha-685971" exists ...
	I1007 11:00:53.623385  948383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-685971
	I1007 11:00:53.652412  948383 host.go:66] Checking if "ha-685971" exists ...
	I1007 11:00:53.652746  948383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 11:00:53.652816  948383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971
	I1007 11:00:53.669783  948383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33896 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971/id_rsa Username:docker}
	I1007 11:00:53.767060  948383 ssh_runner.go:195] Run: systemctl --version
	I1007 11:00:53.771692  948383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 11:00:53.783154  948383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:00:53.833891  948383 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:5 ContainersRunning:4 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:56 OomKillDisable:true NGoroutines:81 SystemTime:2024-10-07 11:00:53.824437051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:00:53.834530  948383 kubeconfig.go:125] found "ha-685971" server: "https://192.168.58.254:8443"
	I1007 11:00:53.834571  948383 api_server.go:166] Checking apiserver status ...
	I1007 11:00:53.834622  948383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 11:00:53.845886  948383 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1437/cgroup
	I1007 11:00:53.855826  948383 api_server.go:182] apiserver freezer: "4:freezer:/docker/8be7d7009bbdc17b7dbeb3d71a28fc053eeec09e04ffd4d2a1c56430f01cda88/crio/crio-7d7bda623c13ab5c1028f69340c0683fdbe3e1bd33d1a5685538a7a89db05787"
	I1007 11:00:53.855895  948383 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8be7d7009bbdc17b7dbeb3d71a28fc053eeec09e04ffd4d2a1c56430f01cda88/crio/crio-7d7bda623c13ab5c1028f69340c0683fdbe3e1bd33d1a5685538a7a89db05787/freezer.state
	I1007 11:00:53.867126  948383 api_server.go:204] freezer state: "THAWED"
	I1007 11:00:53.867154  948383 api_server.go:253] Checking apiserver healthz at https://192.168.58.254:8443/healthz ...
	I1007 11:00:53.876699  948383 api_server.go:279] https://192.168.58.254:8443/healthz returned 200:
	ok
	I1007 11:00:53.876732  948383 status.go:463] ha-685971 apiserver status = Running (err=<nil>)
	I1007 11:00:53.876743  948383 status.go:176] ha-685971 status: &{Name:ha-685971 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 11:00:53.876799  948383 status.go:174] checking status of ha-685971-m02 ...
	I1007 11:00:53.877148  948383 cli_runner.go:164] Run: docker container inspect ha-685971-m02 --format={{.State.Status}}
	I1007 11:00:53.893541  948383 status.go:371] ha-685971-m02 host status = "Stopped" (err=<nil>)
	I1007 11:00:53.893567  948383 status.go:384] host is not running, skipping remaining checks
	I1007 11:00:53.893574  948383 status.go:176] ha-685971-m02 status: &{Name:ha-685971-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 11:00:53.893609  948383 status.go:174] checking status of ha-685971-m03 ...
	I1007 11:00:53.893901  948383 cli_runner.go:164] Run: docker container inspect ha-685971-m03 --format={{.State.Status}}
	I1007 11:00:53.912359  948383 status.go:371] ha-685971-m03 host status = "Running" (err=<nil>)
	I1007 11:00:53.912382  948383 host.go:66] Checking if "ha-685971-m03" exists ...
	I1007 11:00:53.912680  948383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-685971-m03
	I1007 11:00:53.931284  948383 host.go:66] Checking if "ha-685971-m03" exists ...
	I1007 11:00:53.931590  948383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 11:00:53.931635  948383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m03
	I1007 11:00:53.948632  948383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33906 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971-m03/id_rsa Username:docker}
	I1007 11:00:54.046475  948383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 11:00:54.065179  948383 kubeconfig.go:125] found "ha-685971" server: "https://192.168.58.254:8443"
	I1007 11:00:54.065210  948383 api_server.go:166] Checking apiserver status ...
	I1007 11:00:54.065259  948383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 11:00:54.079028  948383 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1333/cgroup
	I1007 11:00:54.089964  948383 api_server.go:182] apiserver freezer: "4:freezer:/docker/8657ce63685491ca9f40a5571de3ac5d54899297e8e231e1a56df2b5b81d65fa/crio/crio-2102afff015c8106b036489c87df02c22d83cb14ff571cc5c3a3f12ae8be9906"
	I1007 11:00:54.090046  948383 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8657ce63685491ca9f40a5571de3ac5d54899297e8e231e1a56df2b5b81d65fa/crio/crio-2102afff015c8106b036489c87df02c22d83cb14ff571cc5c3a3f12ae8be9906/freezer.state
	I1007 11:00:54.100331  948383 api_server.go:204] freezer state: "THAWED"
	I1007 11:00:54.100364  948383 api_server.go:253] Checking apiserver healthz at https://192.168.58.254:8443/healthz ...
	I1007 11:00:54.108568  948383 api_server.go:279] https://192.168.58.254:8443/healthz returned 200:
	ok
	I1007 11:00:54.108602  948383 status.go:463] ha-685971-m03 apiserver status = Running (err=<nil>)
	I1007 11:00:54.108613  948383 status.go:176] ha-685971-m03 status: &{Name:ha-685971-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 11:00:54.108661  948383 status.go:174] checking status of ha-685971-m04 ...
	I1007 11:00:54.109026  948383 cli_runner.go:164] Run: docker container inspect ha-685971-m04 --format={{.State.Status}}
	I1007 11:00:54.127570  948383 status.go:371] ha-685971-m04 host status = "Running" (err=<nil>)
	I1007 11:00:54.127597  948383 host.go:66] Checking if "ha-685971-m04" exists ...
	I1007 11:00:54.127964  948383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-685971-m04
	I1007 11:00:54.145535  948383 host.go:66] Checking if "ha-685971-m04" exists ...
	I1007 11:00:54.145881  948383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 11:00:54.145924  948383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-685971-m04
	I1007 11:00:54.163424  948383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33911 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/ha-685971-m04/id_rsa Username:docker}
	I1007 11:00:54.257580  948383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 11:00:54.273839  948383 status.go:176] ha-685971-m04 status: &{Name:ha-685971-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.78s)
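Worth noting from the output above: with one control-plane node stopped, minikube status still reports the remaining nodes normally but exits with status 7, so callers have to treat a non-zero exit as "degraded" rather than "broken". A small sketch of that handling:

out/minikube-linux-arm64 -p ha-685971 node stop m02 -v=7 --alsologtostderr
if ! out/minikube-linux-arm64 -p ha-685971 status -v=7 --alsologtostderr; then
  # exit status was 7 in this run: at least one node is not Running
  echo "cluster degraded: one or more nodes stopped"
fi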

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (22.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 node start m02 -v=7 --alsologtostderr
E1007 11:00:55.078293  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:01:05.320102  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-685971 node start m02 -v=7 --alsologtostderr: (20.669497651s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-685971 status -v=7 --alsologtostderr: (1.210003359s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.098979921s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (192.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-685971 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-685971 -v=7 --alsologtostderr
E1007 11:01:25.802061  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-685971 -v=7 --alsologtostderr: (36.838367848s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-685971 --wait=true -v=7 --alsologtostderr
E1007 11:02:06.763862  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:02:31.406281  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:02:59.102666  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:03:28.685240  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-685971 --wait=true -v=7 --alsologtostderr: (2m35.289463242s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-685971
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (192.31s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-685971 node delete m03 -v=7 --alsologtostderr: (11.476737774s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.44s)
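
For readability: the node-readiness check at ha_test.go:521 above passes a go-template to kubectl. The sketch below is a minimal, illustrative Go equivalent of that check, assuming the output of "kubectl get nodes -o json" is piped on stdin; the file name and identifiers are not part of the test suite.

// readycheck_sketch.go: a sketch (not part of the test suite) that feeds
// kubectl's node JSON through the same go-template string used by
// ha_test.go:521 above, printing one " True" / " False" line per node.
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// Same template string as in the kubectl command above (outer quotes removed).
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// Decode "kubectl get nodes -o json" from stdin into a generic map,
	// the same shape the go-template walks.
	var nodes map[string]interface{}
	if err := json.NewDecoder(os.Stdin).Decode(&nodes); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}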

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-685971 stop -v=7 --alsologtostderr: (35.813267717s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-685971 status -v=7 --alsologtostderr: exit status 7 (110.977633ms)

                                                
                                                
-- stdout --
	ha-685971
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-685971-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-685971-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 11:05:19.483941  962819 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:05:19.484125  962819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:05:19.484155  962819 out.go:358] Setting ErrFile to fd 2...
	I1007 11:05:19.484187  962819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:05:19.484575  962819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-891319/.minikube/bin
	I1007 11:05:19.484829  962819 out.go:352] Setting JSON to false
	I1007 11:05:19.484877  962819 mustload.go:65] Loading cluster: ha-685971
	I1007 11:05:19.485616  962819 config.go:182] Loaded profile config "ha-685971": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:05:19.485662  962819 status.go:174] checking status of ha-685971 ...
	I1007 11:05:19.486265  962819 notify.go:220] Checking for updates...
	I1007 11:05:19.486480  962819 cli_runner.go:164] Run: docker container inspect ha-685971 --format={{.State.Status}}
	I1007 11:05:19.502427  962819 status.go:371] ha-685971 host status = "Stopped" (err=<nil>)
	I1007 11:05:19.502450  962819 status.go:384] host is not running, skipping remaining checks
	I1007 11:05:19.502456  962819 status.go:176] ha-685971 status: &{Name:ha-685971 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 11:05:19.502484  962819 status.go:174] checking status of ha-685971-m02 ...
	I1007 11:05:19.502790  962819 cli_runner.go:164] Run: docker container inspect ha-685971-m02 --format={{.State.Status}}
	I1007 11:05:19.518597  962819 status.go:371] ha-685971-m02 host status = "Stopped" (err=<nil>)
	I1007 11:05:19.518620  962819 status.go:384] host is not running, skipping remaining checks
	I1007 11:05:19.518627  962819 status.go:176] ha-685971-m02 status: &{Name:ha-685971-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 11:05:19.518651  962819 status.go:174] checking status of ha-685971-m04 ...
	I1007 11:05:19.518936  962819 cli_runner.go:164] Run: docker container inspect ha-685971-m04 --format={{.State.Status}}
	I1007 11:05:19.538557  962819 status.go:371] ha-685971-m04 host status = "Stopped" (err=<nil>)
	I1007 11:05:19.538578  962819 status.go:384] host is not running, skipping remaining checks
	I1007 11:05:19.538585  962819 status.go:176] ha-685971-m04 status: &{Name:ha-685971-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.92s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (67.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-685971 --control-plane -v=7 --alsologtostderr
E1007 11:07:31.402765  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-685971 --control-plane -v=7 --alsologtostderr: (1m6.594299873s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-685971 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (67.59s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

                                                
                                    
TestJSONOutput/start/Command (78.58s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-906859 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-906859 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m18.57494227s)
--- PASS: TestJSONOutput/start/Command (78.58s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-906859 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-906859 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.86s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-906859 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-906859 --output=json --user=testUser: (5.861505642s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-206662 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-206662 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.721649ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d4fe3102-6c47-49f2-882d-415db63bbf1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-206662] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b231d35-08e0-41c3-852d-6b14bde213b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19761"}}
	{"specversion":"1.0","id":"f9bc6004-dd29-4390-9d7a-210a45495182","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4cb14908-cbdd-4fda-bf5b-0b30f9dbe6f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19761-891319/kubeconfig"}}
	{"specversion":"1.0","id":"b2c5ab78-6cb8-4823-9dbd-2c87c6074018","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-891319/.minikube"}}
	{"specversion":"1.0","id":"c1d8d12f-3cd6-4ca9-a06b-7e02f55de0e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a46784c6-3bd8-4331-bf2e-29cb6c1c2ddf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e077e406-fc28-421d-9c44-0929efe9b642","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-206662" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-206662
--- PASS: TestErrorJSONOutput (0.22s)
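
The stdout above is a stream of newline-delimited CloudEvents-style JSON records. The sketch below is a minimal, illustrative Go consumer for such a stream, using only the field names visible in the log (specversion, id, source, type, datacontenttype, data); the file name and struct names are assumptions, not part of minikube or the test suite.

// cloudevent_sketch.go: decodes the newline-delimited JSON records that
// "minikube ... --output=json" prints, using only the keys visible in the
// stdout above. Illustrative only.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the keys seen in the log above.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip lines that are not JSON records
		}
		// An *.error record carries exitcode/name/message, as in the
		// DRV_UNSUPPORTED_OS record shown above.
		fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
	}
}

Any of the --output=json invocations in this report could be piped into such a consumer, e.g. out/minikube-linux-arm64 start -p json-output-906859 --output=json ... | go run cloudevent_sketch.go.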

                                                
                                    
TestKicCustomNetwork/create_custom_network (36.42s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-450435 --network=
E1007 11:10:44.824849  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-450435 --network=: (34.362271497s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-450435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-450435
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-450435: (2.030274003s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.42s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (34.14s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-865264 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-865264 --network=bridge: (32.128816743s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-865264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-865264
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-865264: (1.99603093s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.14s)

                                                
                                    
TestKicExistingNetwork (34.32s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1007 11:11:31.095703  896726 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1007 11:11:31.110882  896726 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1007 11:11:31.110970  896726 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1007 11:11:31.110993  896726 cli_runner.go:164] Run: docker network inspect existing-network
W1007 11:11:31.127196  896726 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1007 11:11:31.127228  896726 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1007 11:11:31.127245  896726 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1007 11:11:31.127367  896726 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1007 11:11:31.144375  896726 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa98f111c271 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:cf:52:8b:17} reservation:<nil>}
I1007 11:11:31.144831  896726 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-93be4a3fd51e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:e9:bf:ba:cf} reservation:<nil>}
I1007 11:11:31.146747  896726 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d45c10}
I1007 11:11:31.146822  896726 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I1007 11:11:31.146880  896726 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1007 11:11:31.216752  896726 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-776689 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-776689 --network=existing-network: (32.156653831s)
helpers_test.go:175: Cleaning up "existing-network-776689" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-776689
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-776689: (2.012196242s)
I1007 11:12:05.402634  896726 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.32s)
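
The network.go lines above show the subnet scan: 192.168.49.0/24 and 192.168.58.0/24 are already taken by existing bridges, so 192.168.67.0/24 is used. The sketch below is a toy illustration of that idea only, not minikube's actual network.go implementation; the candidate list and the step of 9 between candidates are assumptions inferred from the three subnets visible in the log.

// freesubnet_sketch.go: a toy illustration of "pick the first free private
// /24", matching the scan order visible in the log above (49, 58, 67, ...).
// Not minikube's real implementation.
package main

import "fmt"

// firstFreeSubnet returns the first candidate /24 not present in taken.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet < 255; octet += 9 { // step size is an assumption
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // existing bridge br-fa98f111c271 in the log
		"192.168.58.0/24": true, // existing bridge br-93be4a3fd51e in the log
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.67.0/24
}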

                                                
                                    
TestKicCustomSubnet (31.17s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-108059 --subnet=192.168.60.0/24
E1007 11:12:31.403107  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-108059 --subnet=192.168.60.0/24: (29.152714289s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-108059 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-108059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-108059
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-108059: (1.997706349s)
--- PASS: TestKicCustomSubnet (31.17s)

                                                
                                    
TestKicStaticIP (33.52s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-330828 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-330828 --static-ip=192.168.200.200: (31.271307706s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-330828 ip
helpers_test.go:175: Cleaning up "static-ip-330828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-330828
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-330828: (2.096730578s)
--- PASS: TestKicStaticIP (33.52s)

                                                
                                    
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (67.15s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-291869 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-291869 --driver=docker  --container-runtime=crio: (28.896570028s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-295069 --driver=docker  --container-runtime=crio
E1007 11:13:54.465325  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-295069 --driver=docker  --container-runtime=crio: (33.094106656s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-291869
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-295069
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-295069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-295069
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-295069: (1.950628538s)
helpers_test.go:175: Cleaning up "first-291869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-291869
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-291869: (1.894865562s)
--- PASS: TestMinikubeProfile (67.15s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.9s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-822842 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-822842 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.900773911s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.90s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-822842 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.27s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-824852 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-824852 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.268507835s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.27s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-824852 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-822842 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-822842 --alsologtostderr -v=5: (1.612247517s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-824852 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-824852
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-824852: (1.22054035s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.62s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-824852
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-824852: (6.618186067s)
--- PASS: TestMountStart/serial/RestartStopped (7.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-824852 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (104.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-881859 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1007 11:15:44.824325  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-881859 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m43.976572478s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (104.47s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881859 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881859 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-881859 -- rollout status deployment/busybox: (4.846860907s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881859 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881859 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881859 -- exec busybox-7dff88458-fh96z -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881859 -- exec busybox-7dff88458-wnpgh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881859 -- exec busybox-7dff88458-fh96z -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881859 -- exec busybox-7dff88458-wnpgh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881859 -- exec busybox-7dff88458-fh96z -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881859 -- exec busybox-7dff88458-wnpgh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.71s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881859 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881859 -- exec busybox-7dff88458-fh96z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881859 -- exec busybox-7dff88458-fh96z -- sh -c "ping -c 1 192.168.76.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881859 -- exec busybox-7dff88458-wnpgh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-881859 -- exec busybox-7dff88458-wnpgh -- sh -c "ping -c 1 192.168.76.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                    
TestMultiNode/serial/AddNode (57.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-881859 -v 3 --alsologtostderr
E1007 11:17:07.888804  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:17:31.403035  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-881859 -v 3 --alsologtostderr: (57.213451069s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.88s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-881859 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 cp testdata/cp-test.txt multinode-881859:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 cp multinode-881859:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1979812652/001/cp-test_multinode-881859.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 cp multinode-881859:/home/docker/cp-test.txt multinode-881859-m02:/home/docker/cp-test_multinode-881859_multinode-881859-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859-m02 "sudo cat /home/docker/cp-test_multinode-881859_multinode-881859-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 cp multinode-881859:/home/docker/cp-test.txt multinode-881859-m03:/home/docker/cp-test_multinode-881859_multinode-881859-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859-m03 "sudo cat /home/docker/cp-test_multinode-881859_multinode-881859-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 cp testdata/cp-test.txt multinode-881859-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 cp multinode-881859-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1979812652/001/cp-test_multinode-881859-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 cp multinode-881859-m02:/home/docker/cp-test.txt multinode-881859:/home/docker/cp-test_multinode-881859-m02_multinode-881859.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859 "sudo cat /home/docker/cp-test_multinode-881859-m02_multinode-881859.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 cp multinode-881859-m02:/home/docker/cp-test.txt multinode-881859-m03:/home/docker/cp-test_multinode-881859-m02_multinode-881859-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859-m03 "sudo cat /home/docker/cp-test_multinode-881859-m02_multinode-881859-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 cp testdata/cp-test.txt multinode-881859-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 cp multinode-881859-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1979812652/001/cp-test_multinode-881859-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 cp multinode-881859-m03:/home/docker/cp-test.txt multinode-881859:/home/docker/cp-test_multinode-881859-m03_multinode-881859.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859 "sudo cat /home/docker/cp-test_multinode-881859-m03_multinode-881859.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 cp multinode-881859-m03:/home/docker/cp-test.txt multinode-881859-m02:/home/docker/cp-test_multinode-881859-m03_multinode-881859-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 ssh -n multinode-881859-m02 "sudo cat /home/docker/cp-test_multinode-881859-m03_multinode-881859-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.14s)

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-881859 node stop m03: (1.221428287s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-881859 status: exit status 7 (505.868539ms)

                                                
                                                
-- stdout --
	multinode-881859
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-881859-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-881859-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-881859 status --alsologtostderr: exit status 7 (530.857925ms)

                                                
                                                
-- stdout --
	multinode-881859
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-881859-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-881859-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 11:17:49.672828 1017238 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:17:49.673036 1017238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:17:49.673063 1017238 out.go:358] Setting ErrFile to fd 2...
	I1007 11:17:49.673081 1017238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:17:49.673372 1017238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-891319/.minikube/bin
	I1007 11:17:49.673598 1017238 out.go:352] Setting JSON to false
	I1007 11:17:49.673654 1017238 mustload.go:65] Loading cluster: multinode-881859
	I1007 11:17:49.673686 1017238 notify.go:220] Checking for updates...
	I1007 11:17:49.674130 1017238 config.go:182] Loaded profile config "multinode-881859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:17:49.674452 1017238 status.go:174] checking status of multinode-881859 ...
	I1007 11:17:49.675231 1017238 cli_runner.go:164] Run: docker container inspect multinode-881859 --format={{.State.Status}}
	I1007 11:17:49.693904 1017238 status.go:371] multinode-881859 host status = "Running" (err=<nil>)
	I1007 11:17:49.693933 1017238 host.go:66] Checking if "multinode-881859" exists ...
	I1007 11:17:49.694297 1017238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-881859
	I1007 11:17:49.720019 1017238 host.go:66] Checking if "multinode-881859" exists ...
	I1007 11:17:49.720381 1017238 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 11:17:49.720439 1017238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-881859
	I1007 11:17:49.738705 1017238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34017 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/multinode-881859/id_rsa Username:docker}
	I1007 11:17:49.835580 1017238 ssh_runner.go:195] Run: systemctl --version
	I1007 11:17:49.840097 1017238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 11:17:49.851799 1017238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:17:49.907441 1017238 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-07 11:17:49.897056876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:17:49.908032 1017238 kubeconfig.go:125] found "multinode-881859" server: "https://192.168.76.2:8443"
	I1007 11:17:49.908073 1017238 api_server.go:166] Checking apiserver status ...
	I1007 11:17:49.908121 1017238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 11:17:49.919101 1017238 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1425/cgroup
	I1007 11:17:49.928402 1017238 api_server.go:182] apiserver freezer: "4:freezer:/docker/dde0b198ffdc0914eca4232dcab993a7e6a26c2640bbed4bd459883a50ba5c9e/crio/crio-3ac2f31dbcc861b15493fb0b60350a92c63f8e8343cfdd2f4981cf798fabb0af"
	I1007 11:17:49.928481 1017238 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/dde0b198ffdc0914eca4232dcab993a7e6a26c2640bbed4bd459883a50ba5c9e/crio/crio-3ac2f31dbcc861b15493fb0b60350a92c63f8e8343cfdd2f4981cf798fabb0af/freezer.state
	I1007 11:17:49.937311 1017238 api_server.go:204] freezer state: "THAWED"
	I1007 11:17:49.937340 1017238 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1007 11:17:49.944993 1017238 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1007 11:17:49.945027 1017238 status.go:463] multinode-881859 apiserver status = Running (err=<nil>)
	I1007 11:17:49.945062 1017238 status.go:176] multinode-881859 status: &{Name:multinode-881859 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 11:17:49.945093 1017238 status.go:174] checking status of multinode-881859-m02 ...
	I1007 11:17:49.945381 1017238 cli_runner.go:164] Run: docker container inspect multinode-881859-m02 --format={{.State.Status}}
	I1007 11:17:49.966595 1017238 status.go:371] multinode-881859-m02 host status = "Running" (err=<nil>)
	I1007 11:17:49.966621 1017238 host.go:66] Checking if "multinode-881859-m02" exists ...
	I1007 11:17:49.966936 1017238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-881859-m02
	I1007 11:17:49.983209 1017238 host.go:66] Checking if "multinode-881859-m02" exists ...
	I1007 11:17:49.983512 1017238 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 11:17:49.983565 1017238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-881859-m02
	I1007 11:17:50.000792 1017238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34022 SSHKeyPath:/home/jenkins/minikube-integration/19761-891319/.minikube/machines/multinode-881859-m02/id_rsa Username:docker}
	I1007 11:17:50.101853 1017238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 11:17:50.114023 1017238 status.go:176] multinode-881859-m02 status: &{Name:multinode-881859-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1007 11:17:50.114061 1017238 status.go:174] checking status of multinode-881859-m03 ...
	I1007 11:17:50.114427 1017238 cli_runner.go:164] Run: docker container inspect multinode-881859-m03 --format={{.State.Status}}
	I1007 11:17:50.137728 1017238 status.go:371] multinode-881859-m03 host status = "Stopped" (err=<nil>)
	I1007 11:17:50.137754 1017238 status.go:384] host is not running, skipping remaining checks
	I1007 11:17:50.137761 1017238 status.go:176] multinode-881859-m03 status: &{Name:multinode-881859-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-881859 node start m03 -v=7 --alsologtostderr: (9.27305103s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.26s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (96.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-881859
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-881859
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-881859: (24.849988747s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-881859 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-881859 --wait=true -v=8 --alsologtostderr: (1m11.098832836s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-881859
--- PASS: TestMultiNode/serial/RestartKeepsNodes (96.08s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-881859 node delete m03: (4.782879109s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.45s)
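
For reference, the node-management subcommands exercised by the entries above can be chained by hand; the profile and node names below simply mirror the log and are otherwise illustrative:

    $ out/minikube-linux-arm64 node list -p multinode-881859
    $ out/minikube-linux-arm64 -p multinode-881859 node start m03     # restart a stopped worker
    $ out/minikube-linux-arm64 -p multinode-881859 node delete m03    # drop the worker from the cluster
    $ kubectl get nodes                                               # remaining nodes should report Ready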

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-881859 stop: (23.704327746s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-881859 status: exit status 7 (105.367959ms)

                                                
                                                
-- stdout --
	multinode-881859
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-881859-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-881859 status --alsologtostderr: exit status 7 (96.178236ms)

                                                
                                                
-- stdout --
	multinode-881859
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-881859-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 11:20:05.797954 1024972 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:20:05.798086 1024972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:20:05.798095 1024972 out.go:358] Setting ErrFile to fd 2...
	I1007 11:20:05.798100 1024972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:20:05.798352 1024972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-891319/.minikube/bin
	I1007 11:20:05.798535 1024972 out.go:352] Setting JSON to false
	I1007 11:20:05.798565 1024972 mustload.go:65] Loading cluster: multinode-881859
	I1007 11:20:05.798665 1024972 notify.go:220] Checking for updates...
	I1007 11:20:05.798982 1024972 config.go:182] Loaded profile config "multinode-881859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:20:05.798997 1024972 status.go:174] checking status of multinode-881859 ...
	I1007 11:20:05.799500 1024972 cli_runner.go:164] Run: docker container inspect multinode-881859 --format={{.State.Status}}
	I1007 11:20:05.817562 1024972 status.go:371] multinode-881859 host status = "Stopped" (err=<nil>)
	I1007 11:20:05.817581 1024972 status.go:384] host is not running, skipping remaining checks
	I1007 11:20:05.817588 1024972 status.go:176] multinode-881859 status: &{Name:multinode-881859 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 11:20:05.817624 1024972 status.go:174] checking status of multinode-881859-m02 ...
	I1007 11:20:05.817917 1024972 cli_runner.go:164] Run: docker container inspect multinode-881859-m02 --format={{.State.Status}}
	I1007 11:20:05.842169 1024972 status.go:371] multinode-881859-m02 host status = "Stopped" (err=<nil>)
	I1007 11:20:05.842189 1024972 status.go:384] host is not running, skipping remaining checks
	I1007 11:20:05.842196 1024972 status.go:176] multinode-881859-m02 status: &{Name:multinode-881859-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.91s)
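
A short way to reproduce the behaviour checked above: once every node of the profile is stopped, `status` deliberately exits non-zero (exit code 7) while still printing the per-node summary shown in the stdout block. Sketch, profile name as in the log:

    $ out/minikube-linux-arm64 -p multinode-881859 stop
    $ out/minikube-linux-arm64 -p multinode-881859 status
    $ echo $?    # 7, i.e. hosts are stopped; the test treats this as the expected result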

                                                
                                    
TestMultiNode/serial/RestartMultiNode (54.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-881859 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1007 11:20:44.824933  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-881859 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (53.920329012s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-881859 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.82s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (31.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-881859
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-881859-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-881859-m02 --driver=docker  --container-runtime=crio: exit status 14 (83.311173ms)

                                                
                                                
-- stdout --
	* [multinode-881859-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19761-891319/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-891319/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-881859-m02' is duplicated with machine name 'multinode-881859-m02' in profile 'multinode-881859'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-881859-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-881859-m03 --driver=docker  --container-runtime=crio: (29.468339585s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-881859
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-881859: exit status 80 (328.076444ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-881859 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-881859-m03 already exists in multinode-881859-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-881859-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-881859-m03: (1.945930255s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.89s)
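
The name-conflict checks above come down to two rules: a new profile cannot reuse a machine name that already belongs to an existing profile (exit 14, MK_USAGE), and `node add` refuses to create a node whose name collides with another profile (exit 80, GUEST_NODE_ADD). A condensed reproduction using the same names as the log:

    $ out/minikube-linux-arm64 start -p multinode-881859-m02 --driver=docker --container-runtime=crio
    # exit 14: profile name duplicates machine 'multinode-881859-m02' in profile 'multinode-881859'
    $ out/minikube-linux-arm64 start -p multinode-881859-m03 --driver=docker --container-runtime=crio
    $ out/minikube-linux-arm64 node add -p multinode-881859
    # exit 80: node multinode-881859-m03 already exists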

                                                
                                    
TestPreload (123.8s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-582584 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1007 11:22:31.404407  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-582584 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m33.256729151s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-582584 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-582584 image pull gcr.io/k8s-minikube/busybox: (3.13636505s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-582584
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-582584: (5.73667072s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-582584 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-582584 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (18.99071702s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-582584 image list
helpers_test.go:175: Cleaning up "test-preload-582584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-582584
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-582584: (2.369139731s)
--- PASS: TestPreload (123.80s)
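
TestPreload follows a simple recipe: build a cluster with image preloading disabled on an older Kubernetes, pull an extra image, restart on the default version, and check the image is still present. The same steps by hand, with the log's profile name standing in for any profile:

    $ out/minikube-linux-arm64 start -p test-preload-582584 --memory=2200 --preload=false \
        --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
    $ out/minikube-linux-arm64 -p test-preload-582584 image pull gcr.io/k8s-minikube/busybox
    $ out/minikube-linux-arm64 stop -p test-preload-582584
    $ out/minikube-linux-arm64 start -p test-preload-582584 --memory=2200 --driver=docker --container-runtime=crio
    $ out/minikube-linux-arm64 -p test-preload-582584 image list    # busybox should still be listed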

                                                
                                    
TestScheduledStopUnix (108.76s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-875828 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-875828 --memory=2048 --driver=docker  --container-runtime=crio: (32.119820933s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-875828 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-875828 -n scheduled-stop-875828
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-875828 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1007 11:24:12.973295  896726 retry.go:31] will retry after 71.012µs: open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/scheduled-stop-875828/pid: no such file or directory
I1007 11:24:12.974405  896726 retry.go:31] will retry after 165.385µs: open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/scheduled-stop-875828/pid: no such file or directory
I1007 11:24:12.975522  896726 retry.go:31] will retry after 176.3µs: open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/scheduled-stop-875828/pid: no such file or directory
I1007 11:24:12.976644  896726 retry.go:31] will retry after 342.845µs: open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/scheduled-stop-875828/pid: no such file or directory
I1007 11:24:12.977768  896726 retry.go:31] will retry after 527.214µs: open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/scheduled-stop-875828/pid: no such file or directory
I1007 11:24:12.980703  896726 retry.go:31] will retry after 539.629µs: open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/scheduled-stop-875828/pid: no such file or directory
I1007 11:24:12.982640  896726 retry.go:31] will retry after 1.401365ms: open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/scheduled-stop-875828/pid: no such file or directory
I1007 11:24:12.984323  896726 retry.go:31] will retry after 2.219372ms: open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/scheduled-stop-875828/pid: no such file or directory
I1007 11:24:12.987501  896726 retry.go:31] will retry after 3.081476ms: open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/scheduled-stop-875828/pid: no such file or directory
I1007 11:24:12.991666  896726 retry.go:31] will retry after 1.981043ms: open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/scheduled-stop-875828/pid: no such file or directory
I1007 11:24:12.993859  896726 retry.go:31] will retry after 3.283934ms: open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/scheduled-stop-875828/pid: no such file or directory
I1007 11:24:12.998061  896726 retry.go:31] will retry after 9.558904ms: open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/scheduled-stop-875828/pid: no such file or directory
I1007 11:24:13.008452  896726 retry.go:31] will retry after 16.788475ms: open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/scheduled-stop-875828/pid: no such file or directory
I1007 11:24:13.025681  896726 retry.go:31] will retry after 12.464694ms: open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/scheduled-stop-875828/pid: no such file or directory
I1007 11:24:13.039798  896726 retry.go:31] will retry after 31.687319ms: open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/scheduled-stop-875828/pid: no such file or directory
I1007 11:24:13.072036  896726 retry.go:31] will retry after 38.377033ms: open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/scheduled-stop-875828/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-875828 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-875828 -n scheduled-stop-875828
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-875828
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-875828 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-875828
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-875828: exit status 7 (76.966016ms)

                                                
                                                
-- stdout --
	scheduled-stop-875828
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-875828 -n scheduled-stop-875828
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-875828 -n scheduled-stop-875828: exit status 7 (80.511083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-875828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-875828
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-875828: (5.047891923s)
--- PASS: TestScheduledStopUnix (108.76s)
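
The scheduled-stop flow above, in plain commands: schedule a stop, cancel it, then schedule a short one and let it fire, after which `status` reports Stopped with exit code 7. Profile name taken from the log:

    $ out/minikube-linux-arm64 stop -p scheduled-stop-875828 --schedule 5m
    $ out/minikube-linux-arm64 stop -p scheduled-stop-875828 --cancel-scheduled
    $ out/minikube-linux-arm64 stop -p scheduled-stop-875828 --schedule 15s
    $ out/minikube-linux-arm64 status -p scheduled-stop-875828    # exit 7 once the scheduled stop has run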

                                                
                                    
TestInsufficientStorage (10.35s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-600964 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-600964 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.840087991s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6dc1a289-16cf-4764-b20b-0bd1559ceb65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-600964] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"73bc6af5-a587-44e7-ac34-6283de598d12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19761"}}
	{"specversion":"1.0","id":"c5c1e46c-4c43-4394-9408-198bc1c7f505","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"125362a3-4863-4ed0-875a-ab37e1f01615","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19761-891319/kubeconfig"}}
	{"specversion":"1.0","id":"7cc0a6e6-34e1-4737-86ed-b32b585286df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-891319/.minikube"}}
	{"specversion":"1.0","id":"e6bb3edf-7d03-4fb5-acc7-9cbfed4fde29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"0a6fdac3-9a1f-4b88-b5d5-dd22cdb283df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"633553bb-adfe-449f-90da-2ce7bac5b70e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"bb96ee66-ca20-46bd-ba24-e742d7e86e6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6cc4d21e-e50b-45ef-954b-9d7c75a86402","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"da829743-6954-47ba-ba9a-293dd9ebb9de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"fd9f570a-3da4-4c92-afbc-b2bf33d0570a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-600964\" primary control-plane node in \"insufficient-storage-600964\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"53f368e2-0eac-4a12-a538-cb9cf007afab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727731891-master ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"720bc9f6-1634-401f-8e17-7a34d83b7178","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"735c0f56-eddc-4c0b-944c-1b1e861a3688","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-600964 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-600964 --output=json --layout=cluster: exit status 7 (298.732996ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-600964","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-600964","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 11:25:37.233645 1042628 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-600964" does not appear in /home/jenkins/minikube-integration/19761-891319/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-600964 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-600964 --output=json --layout=cluster: exit status 7 (297.310844ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-600964","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-600964","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 11:25:37.530416 1042689 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-600964" does not appear in /home/jenkins/minikube-integration/19761-891319/kubeconfig
	E1007 11:25:37.540658 1042689 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/insufficient-storage-600964/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-600964" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-600964
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-600964: (1.910683107s)
--- PASS: TestInsufficientStorage (10.35s)
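
The storage check is driven by the two test-only environment variables visible in the JSON output above; with a faked capacity, `start` aborts with exit code 26 (RSRC_DOCKER_STORAGE) unless `--force` is passed, and `status` then reports StatusCode 507. Sketch of the trigger:

    $ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
        out/minikube-linux-arm64 start -p insufficient-storage-600964 --memory=2048 \
        --output=json --wait=true --driver=docker --container-runtime=crio
    # exit 26: "Docker is out of disk space! (/var is at 100% of capacity)"
    $ out/minikube-linux-arm64 status -p insufficient-storage-600964 --output=json --layout=cluster
    # exit 7, StatusCode 507 (InsufficientStorage)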

                                                
                                    
TestRunningBinaryUpgrade (107.72s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3626190271 start -p running-upgrade-555662 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3626190271 start -p running-upgrade-555662 --memory=2200 --vm-driver=docker  --container-runtime=crio: (36.52708201s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-555662 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1007 11:30:34.466653  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:30:44.824232  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-555662 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m7.755835176s)
helpers_test.go:175: Cleaning up "running-upgrade-555662" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-555662
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-555662: (2.836962262s)
--- PASS: TestRunningBinaryUpgrade (107.72s)
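
The running-binary upgrade starts a cluster with an older released minikube and then simply re-runs `start` on the same profile with the binary under test; the cluster is never stopped in between. The /tmp path below is wherever the harness dropped the v1.26.0 binary:

    $ /tmp/minikube-v1.26.0.3626190271 start -p running-upgrade-555662 --memory=2200 \
        --vm-driver=docker --container-runtime=crio
    $ out/minikube-linux-arm64 start -p running-upgrade-555662 --memory=2200 \
        --driver=docker --container-runtime=crio    # in-place upgrade of the running cluster
    $ out/minikube-linux-arm64 delete -p running-upgrade-555662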

                                                
                                    
TestKubernetesUpgrade (389.44s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-578426 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-578426 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m14.643280713s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-578426
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-578426: (1.376358397s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-578426 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-578426 status --format={{.Host}}: exit status 7 (117.858393ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-578426 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-578426 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m35.691920273s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-578426 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-578426 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-578426 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (86.902546ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-578426] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19761-891319/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-891319/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-578426
	    minikube start -p kubernetes-upgrade-578426 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5784262 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-578426 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-578426 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-578426 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.049550799s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-578426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-578426
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-578426: (2.369224782s)
--- PASS: TestKubernetesUpgrade (389.44s)
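
Condensed, the upgrade test pins an old Kubernetes, stops, upgrades, and then confirms that a downgrade of the existing cluster is refused with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED):

    $ out/minikube-linux-arm64 start -p kubernetes-upgrade-578426 --memory=2200 \
        --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
    $ out/minikube-linux-arm64 stop -p kubernetes-upgrade-578426
    $ out/minikube-linux-arm64 start -p kubernetes-upgrade-578426 --memory=2200 \
        --kubernetes-version=v1.31.1 --driver=docker --container-runtime=crio
    $ out/minikube-linux-arm64 start -p kubernetes-upgrade-578426 --memory=2200 \
        --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
    # exit 106: an existing v1.31.1 cluster cannot be downgraded to v1.20.0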

                                                
                                    
TestMissingContainerUpgrade (161.89s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3024151236 start -p missing-upgrade-315703 --memory=2200 --driver=docker  --container-runtime=crio
E1007 11:25:44.824105  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3024151236 start -p missing-upgrade-315703 --memory=2200 --driver=docker  --container-runtime=crio: (1m24.14156269s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-315703
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-315703: (10.462420751s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-315703
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-315703 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1007 11:27:31.403146  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-315703 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.29591671s)
helpers_test.go:175: Cleaning up "missing-upgrade-315703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-315703
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-315703: (3.232602087s)
--- PASS: TestMissingContainerUpgrade (161.89s)
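
The missing-container variant differs from the other upgrade tests only in that the old cluster's container is stopped and removed behind minikube's back before the new binary runs `start`, which then has to recreate it from the profile:

    $ /tmp/minikube-v1.26.0.3024151236 start -p missing-upgrade-315703 --memory=2200 \
        --driver=docker --container-runtime=crio
    $ docker stop missing-upgrade-315703 && docker rm missing-upgrade-315703
    $ out/minikube-linux-arm64 start -p missing-upgrade-315703 --memory=2200 \
        --driver=docker --container-runtime=crio    # recreates the deleted container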

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-509318 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-509318 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (83.676585ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-509318] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19761-891319/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-891319/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
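
This validation is purely client-side: combining `--no-kubernetes` with `--kubernetes-version` exits 14 before any container work starts, and the error suggests clearing a globally configured version. Minimal reproduction plus the suggested fix:

    $ out/minikube-linux-arm64 start -p NoKubernetes-509318 --no-kubernetes \
        --kubernetes-version=1.20 --driver=docker --container-runtime=crio
    # exit 14 (MK_USAGE): cannot specify --kubernetes-version with --no-kubernetes
    $ minikube config unset kubernetes-version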

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-509318 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-509318 --driver=docker  --container-runtime=crio: (38.870267578s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-509318 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.23s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-509318 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-509318 --no-kubernetes --driver=docker  --container-runtime=crio: (4.820935502s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-509318 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-509318 status -o json: exit status 2 (417.762952ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-509318","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-509318
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-509318: (2.350998758s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.59s)

                                                
                                    
TestNoKubernetes/serial/Start (7.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-509318 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-509318 --no-kubernetes --driver=docker  --container-runtime=crio: (7.266107173s)
--- PASS: TestNoKubernetes/serial/Start (7.27s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-509318 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-509318 "sudo systemctl is-active --quiet service kubelet": exit status 1 (323.626891ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.20s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-509318
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-509318: (1.258687535s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-509318 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-509318 --driver=docker  --container-runtime=crio: (7.957934061s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.96s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-509318 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-509318 "sudo systemctl is-active --quiet service kubelet": exit status 1 (344.490238ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)
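
Taken as a group, the remaining NoKubernetes entries verify that a `--no-kubernetes` profile really has no kubelet, both right after creation and after a stop/start cycle; the `systemctl is-active` probe over SSH exits non-zero in both cases. Condensed flow:

    $ out/minikube-linux-arm64 start -p NoKubernetes-509318 --no-kubernetes --driver=docker --container-runtime=crio
    $ out/minikube-linux-arm64 ssh -p NoKubernetes-509318 "sudo systemctl is-active --quiet service kubelet"
    # exit 1: kubelet is not running
    $ out/minikube-linux-arm64 stop -p NoKubernetes-509318
    $ out/minikube-linux-arm64 start -p NoKubernetes-509318 --driver=docker --container-runtime=crio
    $ out/minikube-linux-arm64 ssh -p NoKubernetes-509318 "sudo systemctl is-active --quiet service kubelet"
    # still exit 1: the profile keeps its no-kubernetes setting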

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.60s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (84.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2520252787 start -p stopped-upgrade-561173 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2520252787 start -p stopped-upgrade-561173 --memory=2200 --vm-driver=docker  --container-runtime=crio: (34.893887615s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2520252787 -p stopped-upgrade-561173 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2520252787 -p stopped-upgrade-561173 stop: (2.494205766s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-561173 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-561173 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.788384225s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (84.18s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-561173
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-561173: (1.147857893s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

                                                
                                    
TestPause/serial/Start (78.53s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-862868 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1007 11:32:31.402989  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-862868 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m18.533732804s)
--- PASS: TestPause/serial/Start (78.53s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (22.89s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-862868 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-862868 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.869343883s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (22.89s)

                                                
                                    
TestPause/serial/Pause (0.99s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-862868 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.99s)

                                                
                                    
TestPause/serial/VerifyStatus (0.4s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-862868 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-862868 --output=json --layout=cluster: exit status 2 (403.35068ms)

                                                
                                                
-- stdout --
	{"Name":"pause-862868","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-862868","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)

                                                
                                    
TestPause/serial/Unpause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-862868 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.80s)

                                                
                                    
TestPause/serial/PauseAgain (0.98s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-862868 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.98s)

                                                
                                    
TestPause/serial/DeletePaused (3.19s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-862868 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-862868 --alsologtostderr -v=5: (3.191894285s)
--- PASS: TestPause/serial/DeletePaused (3.19s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-862868
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-862868: exit status 1 (15.940278ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-862868: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.38s)
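
The TestPause group above walks the whole pause lifecycle; once the profile is deleted, its Docker volume is gone, which is what VerifyDeletedResources asserts. The same lifecycle by hand, profile name as in the log:

    $ out/minikube-linux-arm64 pause -p pause-862868
    $ out/minikube-linux-arm64 status -p pause-862868 --output=json --layout=cluster    # exit 2, StatusCode 418 (Paused)
    $ out/minikube-linux-arm64 unpause -p pause-862868
    $ out/minikube-linux-arm64 delete -p pause-862868
    $ docker volume inspect pause-862868    # exit 1: no such volume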

                                                
                                    
TestNetworkPlugins/group/false (4.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-480497 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-480497 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (264.958091ms)

                                                
                                                
-- stdout --
	* [false-480497] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19761-891319/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-891319/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 11:33:29.899006 1082262 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:33:29.899213 1082262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:33:29.899236 1082262 out.go:358] Setting ErrFile to fd 2...
	I1007 11:33:29.899258 1082262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:33:29.899672 1082262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19761-891319/.minikube/bin
	I1007 11:33:29.900221 1082262 out.go:352] Setting JSON to false
	I1007 11:33:29.901760 1082262 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":26154,"bootTime":1728274656,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1007 11:33:29.901844 1082262 start.go:139] virtualization:  
	I1007 11:33:29.905603 1082262 out.go:177] * [false-480497] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 11:33:29.907391 1082262 out.go:177]   - MINIKUBE_LOCATION=19761
	I1007 11:33:29.907464 1082262 notify.go:220] Checking for updates...
	I1007 11:33:29.910884 1082262 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:33:29.912792 1082262 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19761-891319/kubeconfig
	I1007 11:33:29.914570 1082262 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19761-891319/.minikube
	I1007 11:33:29.916599 1082262 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 11:33:29.918767 1082262 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:33:29.921273 1082262 config.go:182] Loaded profile config "force-systemd-flag-970103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:33:29.921375 1082262 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:33:29.963643 1082262 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 11:33:29.963779 1082262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:33:30.098230 1082262 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-07 11:33:30.072585589 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:33:30.098358 1082262 docker.go:318] overlay module found
	I1007 11:33:30.102334 1082262 out.go:177] * Using the docker driver based on user configuration
	I1007 11:33:30.104506 1082262 start.go:297] selected driver: docker
	I1007 11:33:30.104532 1082262 start.go:901] validating driver "docker" against <nil>
	I1007 11:33:30.104547 1082262 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:33:30.107746 1082262 out.go:201] 
	W1007 11:33:30.109965 1082262 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1007 11:33:30.112019 1082262 out.go:201] 

                                                
                                                
** /stderr **
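The exit status 14 above is minikube's usage-error path: MK_USAGE validation rejects the flag combination before any cluster is created, because the crio runtime requires a CNI and the test deliberately passes --cni=false. A minimal sketch of a start invocation that satisfies this check (the profile name is hypothetical; any supported CNI such as bridge, kindnet, or calico would do):
	out/minikube-linux-arm64 start -p crio-with-cni \
	  --memory=2048 --driver=docker --container-runtime=crio --cni=bridge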
net_test.go:88: 
----------------------- debugLogs start: false-480497 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-480497

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-480497

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-480497

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-480497

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-480497

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-480497

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-480497

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-480497

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-480497

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-480497

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-480497

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-480497" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-480497" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-480497

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-480497"

                                                
                                                
----------------------- debugLogs end: false-480497 [took: 4.280972752s] --------------------------------
helpers_test.go:175: Cleaning up "false-480497" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-480497
--- PASS: TestNetworkPlugins/group/false (4.74s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (147.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-105092 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1007 11:35:44.826895  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-105092 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m27.58716678s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (147.59s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (11.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-105092 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b3c3b46f-cc50-4b96-8e9f-5ab581576a56] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1007 11:37:31.402783  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [b3c3b46f-cc50-4b96-8e9f-5ab581576a56] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003627093s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-105092 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.55s)
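DeployApp is a smoke test of the freshly started v1.20.0 cluster: it applies the busybox manifest, waits for the pod to report Running/Ready, and then execs a trivial command to prove the kubelet and container runtime are wired up end to end. A minimal sketch of the same steps by hand (the kubectl wait call is a stand-in for the test's own polling helper):
	kubectl --context old-k8s-version-105092 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-105092 wait --for=condition=Ready pod/busybox --timeout=8m
	kubectl --context old-k8s-version-105092 exec busybox -- /bin/sh -c "ulimit -n"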

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-105092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-105092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.188339121s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-105092 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.35s)
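EnableAddonWhileActive confirms that the image and registry overrides are accepted while the cluster is running; the describe call above is what surfaces the substituted metrics-server image. A rough sketch of reading that image directly (assuming the two overrides compose to fake.domain/registry.k8s.io/echoserver:1.4, which is how the overridden reference is expected to appear):
	kubectl --context old-k8s-version-105092 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'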

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-105092 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-105092 --alsologtostderr -v=3: (12.487499361s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.49s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-105092 -n old-k8s-version-105092
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-105092 -n old-k8s-version-105092: exit status 7 (90.635414ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-105092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (34.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-105092 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-105092 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (33.782137186s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-105092 -n old-k8s-version-105092
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (34.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (85.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-646606 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-646606 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m25.640971957s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.64s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (30.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-gfwsr" [849b7329-2654-4a4c-89ee-c4d3176ca71a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-cd95d586-gfwsr" [849b7329-2654-4a4c-89ee-c4d3176ca71a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 30.015445993s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (30.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-gfwsr" [849b7329-2654-4a4c-89ee-c4d3176ca71a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004237005s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-105092 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-105092 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-105092 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-105092 -n old-k8s-version-105092
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-105092 -n old-k8s-version-105092: exit status 2 (308.579835ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-105092 -n old-k8s-version-105092
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-105092 -n old-k8s-version-105092: exit status 2 (320.87793ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-105092 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-105092 -n old-k8s-version-105092
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-105092 -n old-k8s-version-105092
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.89s)
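The Pause subtest exercises the full round trip: after pause, status exits 2 and reports the apiserver as Paused and the kubelet as Stopped (both treated as "may be ok" here), and after unpause the same queries return to their running states. A minimal sketch of that sequence (the single combined --format template is an assumption; the test queries each field separately, as shown above):
	out/minikube-linux-arm64 pause -p old-k8s-version-105092 --alsologtostderr -v=1
	out/minikube-linux-arm64 status -p old-k8s-version-105092 --format='{{.APIServer}}/{{.Kubelet}}' || true   # Paused/Stopped, exit 2
	out/minikube-linux-arm64 unpause -p old-k8s-version-105092 --alsologtostderr -v=1
	out/minikube-linux-arm64 status -p old-k8s-version-105092 --format='{{.APIServer}}/{{.Kubelet}}'           # Running/Running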

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (64.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-430551 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-430551 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m4.307029738s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (64.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-646606 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [72b3dd63-a9fd-4f68-8388-09b85579e27d] Pending
helpers_test.go:344: "busybox" [72b3dd63-a9fd-4f68-8388-09b85579e27d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [72b3dd63-a9fd-4f68-8388-09b85579e27d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.006198836s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-646606 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.42s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-646606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-646606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.246023069s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-646606 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.41s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-646606 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-646606 --alsologtostderr -v=3: (12.119683699s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-646606 -n embed-certs-646606
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-646606 -n embed-certs-646606: exit status 7 (87.065855ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-646606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (290.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-646606 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-646606 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m49.776611147s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-646606 -n embed-certs-646606
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (290.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-430551 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fb0d6690-13ec-4d87-a385-ae4c7d5c25dd] Pending
helpers_test.go:344: "busybox" [fb0d6690-13ec-4d87-a385-ae4c7d5c25dd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fb0d6690-13ec-4d87-a385-ae4c7d5c25dd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003122064s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-430551 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.40s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-430551 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-430551 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.017236185s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-430551 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (11.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-430551 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-430551 --alsologtostderr -v=3: (11.974966729s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.98s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-430551 -n no-preload-430551
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-430551 -n no-preload-430551: exit status 7 (101.339575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-430551 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (267.7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-430551 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1007 11:40:44.824535  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:42:30.763433  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:42:30.770100  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:42:30.781508  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:42:30.802897  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:42:30.844280  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:42:30.925648  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:42:31.087966  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:42:31.402718  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:42:31.410106  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:42:32.051940  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:42:33.334126  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:42:35.896453  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:42:41.018605  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:42:51.260564  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:43:11.742796  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:43:52.705249  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-430551 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m27.243521062s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-430551 -n no-preload-430551
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.70s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hhdbn" [daa57af7-36be-4eca-a038-6c94597d40fc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00333587s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hhdbn" [daa57af7-36be-4eca-a038-6c94597d40fc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004473649s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-646606 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-646606 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (4.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-646606 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-646606 -n embed-certs-646606
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-646606 -n embed-certs-646606: exit status 2 (325.109855ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-646606 -n embed-certs-646606
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-646606 -n embed-certs-646606: exit status 2 (328.398591ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-646606 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-646606 --alsologtostderr -v=1: (1.513088572s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-646606 -n embed-certs-646606
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-646606 -n embed-certs-646606
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.06s)
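
For reference, the pause/unpause flow exercised here can be replayed by hand against the same profile. The commands below are a sketch taken directly from the log above, not part of the test itself; the exit status 2 results from "status" are expected while components are paused or stopped:

    out/minikube-linux-arm64 pause -p embed-certs-646606 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-646606 -n embed-certs-646606   # prints "Paused", exit status 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-646606 -n embed-certs-646606     # prints "Stopped", exit status 2
    out/minikube-linux-arm64 unpause -p embed-certs-646606 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-646606 -n embed-certs-646606   # runs cleanly again after unpause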

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-326733 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-326733 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m20.52122645s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.52s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8j4kj" [941d4bc6-891e-4bff-a75e-c994591b4cca] Running
E1007 11:45:14.628047  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004289811s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8j4kj" [941d4bc6-891e-4bff-a75e-c994591b4cca] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00514455s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-430551 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-430551 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (4.12s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-430551 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-430551 -n no-preload-430551
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-430551 -n no-preload-430551: exit status 2 (441.831719ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-430551 -n no-preload-430551
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-430551 -n no-preload-430551: exit status 2 (407.692939ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-430551 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-430551 -n no-preload-430551
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-430551 -n no-preload-430551
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (38.52s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-793745 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1007 11:45:44.824419  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-793745 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (38.52088107s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.52s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-793745 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-793745 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-793745 --alsologtostderr -v=3: (1.214443635s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-793745 -n newest-cni-793745
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-793745 -n newest-cni-793745: exit status 7 (76.133608ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-793745 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
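
This subtest shows that addon changes are accepted while the profile is stopped: the host status query prints Stopped and exits with status 7, yet "addons enable" still succeeds, so the dashboard can come up on the next start. A manual replay (commands taken from the log above, for illustration only):

    out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-793745 -n newest-cni-793745      # prints "Stopped", exit status 7
    out/minikube-linux-arm64 addons enable dashboard -p newest-cni-793745 --images=MetricsScraper=registry.k8s.io/echoserver:1.4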

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.52s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-793745 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-793745 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (17.131750587s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-793745 -n newest-cni-793745
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.52s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-326733 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [49cdd496-74f8-433f-a30a-ea5ac3e716af] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [49cdd496-74f8-433f-a30a-ea5ac3e716af] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004520718s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-326733 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.36s)
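
DeployApp applies the repository's testdata/busybox.yaml (not reproduced in this report) and then reads the container's open-file limit. A rough stand-in that creates a similarly labeled pod is sketched below; the image and label are taken from the log, but the manifest details are hypothetical and need not match the test data exactly:

    kubectl --context default-k8s-diff-port-326733 run busybox --restart=Never \
      --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
      --labels=integration-test=busybox --command -- sleep 3600
    kubectl --context default-k8s-diff-port-326733 exec busybox -- /bin/sh -c "ulimit -n"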

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-793745 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.16s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-793745 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-793745 -n newest-cni-793745
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-793745 -n newest-cni-793745: exit status 2 (330.199667ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-793745 -n newest-cni-793745
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-793745 -n newest-cni-793745: exit status 2 (332.531912ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-793745 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-793745 -n newest-cni-793745
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-793745 -n newest-cni-793745
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (81.38s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-480497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-480497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m21.378776182s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-326733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-326733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.772481297s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-326733 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.90s)
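
Here the metrics-server addon is enabled with an overridden image and registry, and the deployment is then described. A quick follow-up check (not part of the test, sketched here for readers verifying the override by hand) is to read the container image straight out of the deployment spec; it should reference the registry and image passed via --registries/--images:

    kubectl --context default-k8s-diff-port-326733 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'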

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-326733 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-326733 --alsologtostderr -v=3: (12.229863671s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-326733 -n default-k8s-diff-port-326733
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-326733 -n default-k8s-diff-port-326733: exit status 7 (106.980477ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-326733 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (271.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-326733 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1007 11:47:14.468396  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:47:30.763405  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:47:31.403696  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-326733 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m31.338352428s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-326733 -n default-k8s-diff-port-326733
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (271.80s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-480497 "pgrep -a kubelet"
I1007 11:47:54.889814  896726 config.go:182] Loaded profile config "auto-480497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-480497 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rmdbl" [1df37d69-39e4-429e-a9e5-62a00c2d9103] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1007 11:47:58.469484  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-rmdbl" [1df37d69-39e4-429e-a9e5-62a00c2d9103] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003173114s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-480497 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-480497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-480497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
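
Taken together, the DNS, Localhost and HairPin checks above probe basic pod networking for the auto (default CNI) profile: the in-cluster name kubernetes.default must resolve, localhost:8080 must be reachable from inside the netcat pod, and the pod must be able to reach itself back through the netcat name (the hairpin case). The same probes can be run manually with the commands from the log:

    kubectl --context auto-480497 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-480497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-480497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"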

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (78.39s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-480497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-480497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m18.394386901s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (78.39s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-mscs5" [ad5859a7-119c-4f3f-b5b1-922617f0e2f2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004158403s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-480497 "pgrep -a kubelet"
I1007 11:49:51.530677  896726 config.go:182] Loaded profile config "kindnet-480497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-480497 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6xvfm" [5e63c2e2-61ee-4fc7-a64f-eef5146d0c6d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6xvfm" [5e63c2e2-61ee-4fc7-a64f-eef5146d0c6d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004229809s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-480497 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-480497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-480497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (65.47s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-480497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1007 11:50:27.892017  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:50:28.029511  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/no-preload-430551/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:50:38.271847  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/no-preload-430551/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:50:44.824385  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/functional-721395/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:50:58.753753  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/no-preload-430551/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-480497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m5.470505344s)
--- PASS: TestNetworkPlugins/group/calico/Start (65.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xx6g4" [af9dab69-2673-46f6-a0c5-98e94de126a5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004018173s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xx6g4" [af9dab69-2673-46f6-a0c5-98e94de126a5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004611237s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-326733 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-l8xt7" [c3ea5d21-4fec-457a-bdd3-5de06e16fa2d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005861944s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-326733 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-326733 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-326733 --alsologtostderr -v=1: (1.030903109s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-326733 -n default-k8s-diff-port-326733
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-326733 -n default-k8s-diff-port-326733: exit status 2 (394.533119ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-326733 -n default-k8s-diff-port-326733
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-326733 -n default-k8s-diff-port-326733: exit status 2 (346.1403ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-326733 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-326733 -n default-k8s-diff-port-326733
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-326733 -n default-k8s-diff-port-326733
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.84s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-480497 "pgrep -a kubelet"
I1007 11:51:35.356024  896726 config.go:182] Loaded profile config "calico-480497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-480497 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6v6sm" [447878e1-1729-4cd1-bd98-1118a2a854c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6v6sm" [447878e1-1729-4cd1-bd98-1118a2a854c7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.0036206s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (59.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-480497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-480497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (59.243336614s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-480497 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-480497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-480497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (80.75s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-480497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1007 11:52:30.763028  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/old-k8s-version-105092/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:52:31.403461  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/addons-952725/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-480497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m20.749362102s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.75s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-480497 "pgrep -a kubelet"
I1007 11:52:41.968053  896726 config.go:182] Loaded profile config "custom-flannel-480497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.51s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-480497 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f9qsz" [74c39cd0-1725-43d2-9631-fe893fc9fe63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f9qsz" [74c39cd0-1725-43d2-9631-fe893fc9fe63] Running
E1007 11:52:55.165158  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/auto-480497/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:52:55.171443  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/auto-480497/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:52:55.182819  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/auto-480497/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:52:55.204147  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/auto-480497/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:52:55.245466  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/auto-480497/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:52:55.327291  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/auto-480497/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.005838594s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.51s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-480497 exec deployment/netcat -- nslookup kubernetes.default
E1007 11:52:55.489813  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/auto-480497/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-480497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1007 11:52:55.811803  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/auto-480497/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-480497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (56.42s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-480497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-480497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (56.418570674s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.42s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-480497 "pgrep -a kubelet"
I1007 11:53:35.825570  896726 config.go:182] Loaded profile config "enable-default-cni-480497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-480497 replace --force -f testdata/netcat-deployment.yaml
E1007 11:53:36.180067  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/auto-480497/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mxdnm" [33aece9c-ee0f-48e4-944d-aa80c801172d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mxdnm" [33aece9c-ee0f-48e4-944d-aa80c801172d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.00449028s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-480497 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-480497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-480497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (41.71s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-480497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-480497 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (41.71287847s)
--- PASS: TestNetworkPlugins/group/bridge/Start (41.71s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-jk59r" [f9ce983a-2b66-414f-bcf1-ad27a648d330] Running
E1007 11:54:17.145475  896726 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19761-891319/.minikube/profiles/auto-480497/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.008006473s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-480497 "pgrep -a kubelet"
I1007 11:54:19.959102  896726 config.go:182] Loaded profile config "flannel-480497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-480497 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-29j66" [19ced90c-0dd1-4c2a-810b-289d6f94c209] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-29j66" [19ced90c-0dd1-4c2a-810b-289d6f94c209] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.00396421s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.31s)
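Note: NetCatPod applies testdata/netcat-deployment.yaml and waits for the app=netcat pod to become Ready. The manifest itself is not included in this report; the sketch below is a hypothetical reconstruction from what the log shows (Deployment "netcat", label app=netcat, container "dnsutils", and a Service reachable as "netcat" on port 8080 for the later Localhost/HairPin probes). The image and command are placeholders, not the real test fixture, and the real fixture must also listen on port 8080.

cat <<'EOF' | kubectl --context flannel-480497 replace --force -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netcat
  template:
    metadata:
      labels:
        app: netcat
    spec:
      containers:
      - name: dnsutils              # container name taken from the Pending status above
        image: busybox:1.36         # placeholder; the real image is not shown in this report
        command: ["sleep", "3600"]  # placeholder; the real fixture also serves on port 8080
---
apiVersion: v1
kind: Service
metadata:
  name: netcat
spec:
  selector:
    app: netcat
  ports:
  - port: 8080
    targetPort: 8080
EOF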

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-480497 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-480497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-480497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)
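Note: HairPin is the stricter of the two connect probes: the pod dials its own Service name ("netcat"), so the connection leaves the pod, hits the Service VIP, and has to be NATed back to the very same pod (hairpin traffic). A hedged way to look at the pieces involved:

kubectl --context flannel-480497 get svc netcat -o wide         # the Service VIP the pod dials
kubectl --context flannel-480497 get endpoints netcat           # should list the netcat pod's own IP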

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-480497 "pgrep -a kubelet"
I1007 11:54:52.076211  896726 config.go:182] Loaded profile config "bridge-480497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)
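Note: KubeletFlags only dumps the running kubelet command line over SSH; the assertions on those flags live in net_test.go and are not reproduced in this report. The same inspection can be run by hand, and on a crio cluster the line typically includes --container-runtime-endpoint=unix:///var/run/crio/crio.sock:

out/minikube-linux-arm64 ssh -p bridge-480497 "pgrep -a kubelet"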

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-480497 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4zw7p" [065a2e0e-db39-454e-a59a-06a03be806bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4zw7p" [065a2e0e-db39-454e-a59a-06a03be806bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004007853s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-480497 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)
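Note: the DNS step resolves kubernetes.default from inside the netcat pod; on a healthy cluster the answer is the ClusterIP of the kubernetes Service (10.96.0.1 on minikube's default service CIDR). A rough manual cross-check:

kubectl --context bridge-480497 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context bridge-480497 get svc kubernetes -o jsonpath='{.spec.clusterIP}{"\n"}'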

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-480497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-480497 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (29/328)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.56s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-457065 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-457065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-457065
--- SKIP: TestDownloadOnlyKic (0.56s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.33s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:785: skipping: crio not supported
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-952725 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-489583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-489583
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-480497 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-480497

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-480497

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-480497

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-480497

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-480497

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-480497

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-480497

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-480497

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-480497

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-480497

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-480497

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-480497" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-480497" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-480497

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-480497"

                                                
                                                
----------------------- debugLogs end: kubenet-480497 [took: 4.273003916s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-480497" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-480497
--- SKIP: TestNetworkPlugins/group/kubenet (4.53s)
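Note: every debugLogs probe above fails with "context was not found" or "Profile ... not found". That is expected rather than a bug: the kubenet group is skipped before minikube start ever runs (per the skip reason above, crio requires a CNI, which kubenet is not), so the kubenet-480497 profile and kubeconfig context never exist; the empty kubectl config dump in the middle of the log shows the same thing. A quick confirmation when reproducing locally:

out/minikube-linux-arm64 profile list        # kubenet-480497 will not be listed
kubectl config get-contexts kubenet-480497   # fails: context not found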

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-480497 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-480497

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-480497

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-480497

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-480497

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-480497

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-480497

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-480497

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-480497

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-480497

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-480497

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-480497

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-480497" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-480497

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-480497

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-480497

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-480497

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-480497" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-480497" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-480497

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

>>> host: cri-dockerd version:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

>>> host: containerd daemon status:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

>>> host: containerd daemon config:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

>>> host: containerd config dump:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

>>> host: crio daemon status:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

>>> host: crio daemon config:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

>>> host: /etc/crio:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

>>> host: crio config:
* Profile "cilium-480497" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-480497"

----------------------- debugLogs end: cilium-480497 [took: 4.77071798s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-480497" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-480497
--- SKIP: TestNetworkPlugins/group/cilium (4.96s)